Since 9/11, there's been a renewed interest in facial recognition algorithms to catch terrorists trying to slip into the country, but ten years later the systems still aren't anywhere close to perfect (yes, even including Facebook's creepy facial recognition system).
Perhaps they're going about it the wrong way, according to Ben Austen of Wired. Rather than taking biometric measurements of the size of a person's nose or eyes, computers would do well to learn from caricaturists instead:
Did you hear the one about the vision scientist who used only caricaturists as his test subjects? He exaggerated his findings! Pawan Sinha, director of MIT’s Sinha Laboratory for Vision Research, and one of the nation’s most innovative computer-vision researchers, knows that caricatures are meant to be humorous, grotesque, and outlandish—he dabbles as a caricaturist himself, drawing occasionally for university publications. But Sinha also contends that these simple, exaggerated drawings can be objectively and systematically studied and that such work will lead to breakthroughs in our understanding of both human and machine-based vision. His lab at MIT is preparing to computationally analyze hundreds of caricatures this year, from dozens of different artists, with the hope of tapping their intuitive knowledge of what is and isn’t crucial for recognition. He has named this endeavor the Hirschfeld Project, after the famous New York Times caricaturist Al Hirschfeld.
Quite simply, the Hirschfeld Project would reverse-engineer the caricaturist’s art. By analyzing sketches, Sinha hopes to pinpoint the recurring exaggerations in the caricatures that most strongly correlate to observable deviations in the original faces. The results, he believes, will ultimately produce a rank-ordered list of the 20 or so facial attributes that are most important for recognition: “It’s a recipe for how to encode the face,” he says. In preliminary tests, the lab has already isolated what seem to be important ingredients—for example, the ratio of the height of the forehead to the distance between the top of the nose and the mouth.
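For a concrete sense of what a "relative" facial attribute like that forehead ratio looks like in code, here's a minimal sketch. It assumes you already have (x, y) landmark coordinates for the hairline, brow, top of the nose, and mouth; the landmark names and the toy coordinates are placeholders, not anything from Sinha's lab:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def forehead_ratio(landmarks):
    """Ratio of forehead height to the nose-top-to-mouth distance.

    `landmarks` maps names to (x, y) pixel coordinates from any
    off-the-shelf landmark detector. Because it's a ratio of two
    distances on the same face, it doesn't change when the photo is
    shrunk or enlarged.
    """
    forehead = euclidean(landmarks["hairline"], landmarks["brow"])
    nose_to_mouth = euclidean(landmarks["nose_top"], landmarks["mouth"])
    return forehead / nose_to_mouth

# Toy example with made-up coordinates:
face = {
    "hairline": (100, 40),
    "brow": (100, 90),
    "nose_top": (100, 95),
    "mouth": (100, 160),
}
print(forehead_ratio(face))  # ~0.77 for these made-up points
```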
Well, duh. Otherwise how would one recognize a person in miniature or enlarged photographs?
If it were neurocomputationally viable to recognize absolute metrics, it would probably work as well as, if not better than, using eigenvalues.
Do you even know what I'm talking about?
Okay, as long as you are aware that you are saying "Duh" to something that computational and representational neuroscientists actually debate as if it weren't so obvious. It's something the neurophilosopher and eliminativist Paul Churchland felt was necessary to include in "The Engine of Reason, The Seat of the Soul", the most noted work of the Valtz Chair of Philosophy professor at UCSD.
I think your second statement is a bit wrong, because neither butter nor bread maps very well onto absolute metrics versus eigenvalues. The comparison is off because butter and bread are so dramatically different as to never approximate the same results at all, whereas facial-recognition systems built with either absolute metrics or eigenvalues would be indistinguishable to the untrained observer.
The latent point I'm making is that no such thing as absolute metrics objectively exists. It's not just facial recognition: all feature-detection systems must perform some kind of relative comparison rather than grasping for absolute values. But if you look up "eigenfaces" on the intarwebs, you are going to see exactly what I'm talking about. There is nothing of conscious significance in these images; we do not see that one person has a larger nose than another, and the differences are far more subtle than anything we consciously acknowledge.
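If you want to see it concretely, here's roughly what an eigenface computation looks like; a minimal PCA sketch in NumPy, assuming you have a stack of aligned, flattened grayscale face images (the random array below just stands in for a real dataset):

```python
import numpy as np

# Assume `faces` is an (n_images, n_pixels) array of aligned, flattened
# grayscale face images. Random data here only demonstrates the shapes.
rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))

# 1. Subtract the mean face: eigenfaces encode deviations from the average
#    face, not absolute pixel values (the relative-comparison point above).
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. PCA via SVD. Each row of `vt` is an eigenface: a direction in pixel
#    space along which the faces in the dataset vary the most.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]          # keep the top 20 components

# 3. Any face is then described by ~20 numbers: how far it sits from the
#    mean face along each eigenface. Recognition compares these weights.
weights = centered @ eigenfaces.T
print(weights.shape)          # (100, 20)
```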
I got that the first time. My response is "Duh" because absolute metrics would be pretty useless anyway.
I'm sorry you had to read a fancy paper to figure that out.
Let me guess: you learned the English language without ever having to hear it?
Sorry if I made you feel bad.
I don't really feel special sharing it. But not everybody is a "superior-brained AI", and some people need to be exposed to things before they can absorb them. Kind of like theories of learning and memory, in which it is generally held that NOTHING we think is original, but all of it is learned from prior experiences. David Hume's An Enquiry Concerning Human Understanding comes to mind.
"We are apt to imagine that we could discover these effects by the mere operation of our reason, without experience. We fancy, that were we brought on a sudden into this world, we could at first have inferred that one Billiard-ball would communicate motion to another upon impulse; and that we needed not to have waited for the event, in order to pronounce with certainty concerning it. Such is the influence of custom, that, where it is strongest, it not only covers our natural ignorance, but even conceals itself, and seems not to take place, merely because it is found in the highest degree." - David Hume, An Enquiry Concerning Human Understanding
Full Text: http://www.gutenberg.org/dirs/etext06/8echu10h.htm
I'm getting bored, so I'm afraid I won't be reading your next post. Feel free to refer to another obsolete philosophy if you like.