Earlier this evening, Google released this statement on Google+:
Glass and Facial Recognition
When we started the Explorer Program nearly a year ago our goal was simple: we wanted to make people active participants in shaping the future of this technology ahead of a broader consumer launch. We’ve been listening closely to you, and many have expressed both interest and concern around the possibilities of facial recognition in Glass. As Google has said for several years, we won’t add facial recognition features to our products without having strong privacy protections in place. With that in mind, we won’t be approving any facial recognition Glassware at this time.
We’ve learned a lot from you in just a few weeks and we’ll continue to learn more as we update the software and evolve our policies in the weeks and months ahead.
My first question is: without a Glass “Market” of any sort, how is your approval relevant? So far, the only apps I have seen that got the Google stamp of approval were big, official apps like the New York Times, Twitter, and Facebook. Even though Glass Tweet was one of the first pieces of Glassware available, it still has not been “reviewed by Google”. Nor has our own Glass to Facebook, even though it was available weeks ahead of the official Facebook Glassware. Nor has Glassnost, an independent app and photo-sharing community. In fact, it looks like the only Glassware Google is “approving” is from the big guys who help legitimize Glass!
Why do I make such accusations? I am an Android developer. I’ve spent the last few days at AnDevCon IV, the Android Developer Conference, where I attended two talks on privacy law given by Adam D. H. Grant of the Alpert, Barr and Grant law firm.
The permissions that you agree to when you download an app can do more than just operate the app. For instance, an app that helps you find your car can also tell advertisers what locations you frequent. Now you’ll get ads that are more targeted to your movements.
It gets worse. Much worse. Here’s a slide from the presentation listing the data apps can collect:
Biometrics? Here’s a link for more information, but long story short, it means identifying you by your appearance. An app that has permission to access your photos can send them to the developer to sell to the highest bidder. Who would pay for your photos? Advertisers who want to know everything about you so they can custom-tailor ads to your every need.
But, frighteningly enough, anyone can buy those photos… even those who don’t have such wholesome intentions. I have about 10 photo editing apps on my phone, and I’m scared.
And then there’s #4 to worry about. And #5. And it’s legal!
The whole purpose of the presentation was to educate the group on how to CYA when gathering “PII” (Personally Identifiable Information). As long as the developer discloses their intention in the app, they are acting within the law.
And then there are people like me who assumed that such things could not even be possible. Alas, WRONG!
The speaker probably thought my outrage was sweet and naive. He laughed when I said, “Who would do that to their customers? That’s dishonorable!” But heck, the whole thing really upsets me. For one thing, it makes developers like me look bad! For another, many of my competitors are making big money with these horrible practices and are using that money to gain a market advantage through advertising.
So, all things considered, why is everyone so worried about Google Glass’s potential for facial recognition? Perhaps it’s a diversion. Create a stir around Glass so nobody will pay attention while the relevant privacy laws are being decided…?
You see, the APPS Act of 2013 is less than a month old. Here is Congressman Hank Johnson introducing it:
So maybe the people who make millions from advertising want the public to be outraged about a new, futuristic device so they won’t get concerned about their old, familiar cell phones. After all, our lives are on our phones. We capture moments with pictures, keep our records and contacts on them, and have them within reach 24 hours a day!
Kinda like finding a huge spider living under your bed.
Here is a quick link to the Glass sessions at Google I/O 2013:
Or you can just visit this link:
…to see the 4 sessions individually.
The Google Glass prism is a fascinating piece of technology, but how does it work? Read on and learn more…
Martin Missfeldt created this infographic to describe how the Glass prism works. The graphic is from February, before Glass was available to anyone outside Google. Martin’s theories were based on the Glass patent and various other sources listed at the bottom of the graphic. However, I suspect he did not have the benefit of seeing a good photo of the Glass prism (like mine) which might account for this major oversight.
This image has been posted hundreds of thousands of times (according to a Google image search), even on popular tech sites like Mashable, but has anyone noticed this?
The angle of the screen is backwards!
The relationship between the prism and the eye is not really accurate either. Here’s a photo of me looking through Glass. Compare it to the illustration above. The screen was positioned for best visibility before the photo was taken.
Perhaps it was mirrored to deflect sunlight that would otherwise diffuse the image on the screen. Or maybe it’s so inquisitive people like me can’t look into the projector. But that doesn’t explain why that surface is convex.
So how does the prism work?
Well, there’s a reason I started by saying “Read on and learn more” instead of “Here’s the answer”. There are currently several players in the wearable tech game and I don’t think Google wants to expose their hand quite yet. I’m sure many companies hope to copy the Glass prism, and while imitation may be the highest form of flattery, flattery doesn’t win the game.
Here are some new abstract macro shots of Glass. These are variations of the same view – some with more detail, some with a narrower depth of field. I couldn’t decide which was best so I figured I’d post all 4 of my favorites.
Prezi is great if you have a presentation to do. Its 3-D backgrounds are even more entertaining, but I haven’t used one yet — I need to work on my fonts to get them to stand out from the custom background I created.
This is an excerpt from my work-in-progress Prezi about Glass. This emulates the Glass UI in a really terrific way that other portals don’t capture. Check it out!
Here are some new Macro shots that show more of the structural details of the Glass prism.
Check the Macros page for more great Glass shots.
Earlier this week we saw the first Glass app to use facial recognition, called MedRef for Glass and developed by Lance Nanek. In my research, I found that MedRef uses a web service for its facial recognition, Betafaceapi.com. Betafaceapi has a demo on their site where you can upload your own photos:
So I gave it a whirl. I uploaded my Google+ profile photo along with another two photos and got some interesting results.
The software maps your face and returns some data about it. When I went to the “Recognition” tab and clicked “Compare with celebrities”, I received an assortment of photos of actresses who apparently look like me, at least according to this software. I believe the multi-colored bar across the bottom of each photo indicates how strong the resemblance is. Then, when I clicked “Compare with detected faces”, I got the results of comparing my three photos with the one photo I had clicked on.
Of course the software picks up a strong resemblance between the two identical pictures. However, it seems my resemblance to these other two photos of me is weak. The bar doesn’t even make it over to the green.
Yes, apparently this facial recognition software believes I look more like a bunch of celebrities than like myself.
So no, I’m not too worried about it. Maybe the government has access to that kind of technology AND the photo database that would be needed to make use of it, but I don’t think that stuff is available to the rest of us just yet.
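For the technically inclined: a service like this returns a numeric similarity score for each pair of faces, and the colored bar is just a visualization of that score. Here’s a toy Python illustration of how such a bar might be bucketed — the thresholds are my own guesses for the sake of the example, not Betafaceapi’s actual values:

```python
def resemblance_color(score):
    """Bucket a 0-100 similarity score into bar colors.

    The cutoffs here are illustrative guesses, not Betafaceapi's
    actual thresholds.
    """
    if score >= 70:
        return "green"   # strong match, e.g. two identical photos
    if score >= 40:
        return "yellow"  # plausible resemblance
    return "red"         # weak match

print(resemblance_color(100))  # identical photos -> green
print(resemblance_color(30))   # my two other photos of myself -> red
```

That would explain the result above: the two identical pictures score near the top of the scale, while my other two photos of myself never push the bar into the green.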
It amazes me how many people write about an app like MedRef without ever telling you anything new about using it. Realizing that MedRef did more than the typical transport of data, I did a bit of research trying to find out what other people experienced while using the app. All I found was the typical cant about privacy and Big Brother. Enough of that nonsense! Here’s the real scoop about MedRef.
First, the facial recognition is done through a web service called betafaceapi.com. Lance Nanek discusses the code he uses on his blog, NeatoCode Techniques. His code is open source on GitHub and the link to it is available from his blog.
Since most of the hype regarding MedRef stems from its use of facial recognition, you may be more interested in this next post where I discuss my experiences with Betafaceapi and how you can check it out for yourself without Glass.
* * *
MedRef is designed as a way to organize and access medical records. It does some neat things I haven’t seen before, like the “Pin” feature:
Pinning MedRef places the MedRef “card” just to the left of the home screen so it is always easy to access.
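Lance’s code is open source, so you can check exactly how he implements this, but for the curious: Glassware built on the Google Mirror API pins a card by setting the `isPinned` flag on its timeline item, and the built-in `TOGGLE_PINNED` menu action lets the wearer pin or unpin it. A rough sketch of such a payload (the card text is my placeholder — I haven’t verified this is MedRef’s actual item):

```python
# Sketch of a Mirror API timeline item that supports pinning.
# The content is a placeholder, not MedRef's real card.
timeline_item = {
    "text": "MedRef",
    "isPinned": True,  # a pinned card sits just left of the home screen
    "menuItems": [
        {"action": "TOGGLE_PINNED"},  # built-in pin/unpin menu action
    ],
}
```

In practice this JSON would be sent to the Mirror API’s timeline endpoint by an authorized server-side client.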
When you tap the MedRef card, it gives you this option:
You can create a patient by tapping this card and then saying the patient’s name aloud. I created a patient named “Audrey”, which made a card for Audrey to the right of my home screen. By tapping Audrey’s card, I got this option to add a note:
Another tap gives you the opportunity to record a note. I said the words “test note” (not especially creative, I know) and that turned Audrey’s card to a bundle that looks like this:
Tapping the bundle allowed me to access the note which looked like this:
So that all works well. After several attempts, I still have not figured out how to link a photo with the file. It isn’t explained in the blog OR the video since the patient’s file with photo is already set up before the video demo takes place. I’ve tried sharing a photo with both of these cards:
The only response I’ve received was this one:
From watching the video, I suspect that the app would only try to match a photo with photos in a patient file, which would not work if there are no photos in the patient file. That’s only a theory, and this app seems more like a proof of concept than a finished product.
So I went to Betafaceapi to see what they had going on in the facial recognition department. You can read more about that here.
As far as MedRef goes, I think it has a lot of potential, and it’s great to see a Glass app do something more than send a photo or put a headline in front of your face.
I’d been wanting to see how Glass measures up to the challenge of night photography. As luck would have it, I found myself in Branson, Missouri, which seemed a perfect spot for a nighttime photographic study.
So here is Glass vs DSLR at night with subject matter you’ll never see in California or New York.
There is no DSLR on that last one, but I couldn’t resist including it.
Most of the DSLR shots were done at ISO-1600 which allowed me to get a decent exposure without a tripod. Of course the DSLR shots would have less noise with a long exposure at ISO-100, but that wouldn’t be a fair comparison at all.
These Glass shots have a 1/15 sec. exposure at f/2.5. Glass changes the ISO as light conditions change; these shots came out at ISO speeds such as 363, 418, 551, 678, 727, and 960. Quite a range, and certainly a brilliant way to get properly exposed photos without the use of a flash.
When I did pro wedding photography, 1/60 sec. was considered a good exposure for handheld shots. However, your head is pretty stable, so 1/15 seems like a good bet for clear shots from a headheld camera. Once you are using the lowest F-stop and the longest safe exposure, the only variable you have left to work with is ISO. Glass reads the light and picks the appropriate ISO, and there you have the best possible photo.
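To put some numbers on that reasoning: with the aperture and shutter fixed, ISO is the only knob left, and the ISO spread above corresponds to about 1.4 stops of scene brightness. A quick back-of-the-envelope check in Python, using the standard ISO-adjusted exposure-value formula (the f-number, shutter speed, and ISO figures are the ones from my shots):

```python
import math

def ev100(f_number, shutter_s, iso):
    """ISO-100-equivalent exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Glass night shots: f/2.5 at 1/15 s, with ISO varying per scene
for iso in (363, 418, 551, 678, 727, 960):
    print(f"ISO {iso}: EV100 = {ev100(2.5, 1 / 15, iso):.2f}")

# The ISO range alone spans about 1.4 stops of scene brightness:
print(round(math.log2(960 / 363), 2))  # -> 1.4
```

In other words, by sliding ISO between roughly 363 and 960, Glass can keep the same shutter speed and aperture while absorbing well over a stop of variation in the available light.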
So even though I think the nighttime DSLR shots are generally better, I am very impressed with the Glass shots. I had to set the ISO with my DSLR and then monitor my exposures to make sure they weren’t going too long. I was using complicated settings on a complex camera and I had the benefit of years of SLR experience. The question is not really whether a DSLR can capture better images. Of course it can! And the more effort and expense you are willing to invest, the better your DSLR results will be.
The question is, what can Glass accomplish with virtually NO effort and no expertise…? Can Glass capture the moments you want to remember?
In my opinion, the answer is YES!
Here are a couple of new Glass macro images, this time showing the inside of Glass.
The screen is on, but you can’t really tell without looking at the reflection. Look at the reflected (lower) prism and you can see the time of 8:14 shown backwards, small and faint.
Also, if you look to the right of the prism you’ll see the inner camera/light sensor. This is what makes head detection possible. I believe it is also responsible for wink detection, although so far I’ve not been able to confirm that. Here’s a closer look: