Here are some new abstract macro shots of Glass. These are variations of the same view – some with more detail, some with a narrower depth of field. I couldn’t decide which was best so I figured I’d post all 4 of my favorites.
Prezi is great if you have a presentation to make. Its 3-D backgrounds are even more entertaining, but before I use those I need to work on my fonts so they stand out from the custom background I created.
This is an excerpt from my work-in-progress Prezi about Glass. It emulates the Glass UI in a way that other presentation tools don’t capture. Check it out!
Here are some new macro shots that show more of the structural details of the Glass prism.
Check the Macros page for more great Glass shots.
Earlier this week we saw the first Glass app to use facial recognition, called MedRef for Glass and developed by Lance Nanek. In my research, I found that MedRef uses a web service for its facial recognition, Betafaceapi.com. Betafaceapi has a demo on their site where you can upload your own photos:
So I gave it a whirl. I uploaded my Google+ profile photo along with another two photos and got some interesting results.
The software maps your face and returns a set of data about it. When I went to the “Recognition” tab and clicked “Compare with celebrities”, I received an assortment of photos of actresses who apparently look like me, at least according to this software. I believe the multi-colored bar across the bottom of each photo indicates how strong the resemblance is. Then, when I clicked “Compare with detected faces”, I got the results of comparing my three photos with the one photo I had clicked on.
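Services like this typically reduce each face to a numeric feature vector and score resemblance by how close two vectors are; the colored bar is presumably a visualization of that score. Here’s a minimal sketch of the idea using cosine similarity — the vectors and names below are made up for illustration, not Betafaceapi’s actual features:

```python
import math

def cosine_similarity(a, b):
    """Score resemblance between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy feature vectors standing in for three photos of the same person.
profile_photo = [0.8, 0.1, 0.3, 0.5]
same_photo    = [0.8, 0.1, 0.3, 0.5]   # the identical upload
older_photo   = [0.5, 0.4, 0.2, 0.7]   # different lighting and angle

print(cosine_similarity(profile_photo, same_photo))   # identical pictures score a perfect 1.0
print(cosine_similarity(profile_photo, older_photo))  # a different shot scores noticeably lower
```

That gap between “same photo” and “same face, different photo” is exactly what the results below show: lighting, angle, and expression move the feature vector enough to weaken the match.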
Of course the software picks up a strong resemblance between the two identical pictures. However, it seems my resemblance to these other two photos of me is weak. The bar doesn’t even make it over to the green.
Yes, apparently this facial recognition software believes I look more like a bunch of celebrities than like myself.
So no, I’m not too worried about it. Maybe the government has access to that kind of technology AND the photo database that would be needed to make use of it, but I don’t think that stuff is available to the rest of us just yet.
It amazes me how many people write about an app like MedRef without ever telling you anything new about using it. Realizing that MedRef did more than the typical transport of data, I did a bit of research trying to find out what other people experienced while using the app. All I found was the typical cant about privacy and Big Brother. Enough of that nonsense! Here’s the real scoop about MedRef.
First, the facial recognition is done through a web service called betafaceapi.com. Lance Nanek discusses the code he uses on his blog, NeatoCode Techniques. His code is open source on GitHub and the link to it is available from his blog.
Since most of the hype regarding MedRef stems from its use of facial recognition, you may be more interested in this next post where I discuss my experiences with Betafaceapi and how you can check it out for yourself without Glass.
* * *
MedRef is designed as a way to organize and access medical records. It does some neat things I haven’t seen before, like the “Pin” feature:
Pinning MedRef places the MedRef “card” just to the left of the home screen so it is always easy to access.
When you tap the MedRef card, it gives you this option:
You can create a patient by tapping this card and then saying the patient’s name aloud. I created a patient named “Audrey”, which made a card for Audrey to the right of my home screen. By tapping Audrey’s card, I got this option to add a note:
Another tap gives you the opportunity to record a note. I said the words “test note” (not especially creative, I know) and that turned Audrey’s card into a bundle that looks like this:
Tapping the bundle allowed me to access the note which looked like this:
So that all works well. After several attempts, I still have not figured out how to link a photo with the file. It isn’t explained in the blog OR the video since the patient’s file with photo is already set up before the video demo takes place. I’ve tried sharing a photo with both of these cards:
The only response I’ve received was this one:
From watching the video, I suspect that the app would only try to match a photo with photos in a patient file, which would not work if there are no photos in the patient file. That’s only a theory, and this app seems more like a proof of concept than a finished product.
So I went to Betafaceapi to see what they had going on in the facial recognition department. You can read more about that here.
As far as MedRef goes, I think it has a lot of potential, and it’s great to see a Glass app do something more than send a photo or put a headline in front of your face.
I’d been wanting to see how Glass measures up to the challenge of night photography. As luck would have it, I found myself in Branson, Missouri, which seemed a perfect spot for a nighttime photographic study.
So here is Glass vs DSLR at night with subject matter you’ll never see in California or New York.
There is no DSLR on that last one, but I couldn’t resist including it.
Most of the DSLR shots were done at ISO-1600 which allowed me to get a decent exposure without a tripod. Of course the DSLR shots would have less noise with a long exposure at ISO-100, but that wouldn’t be a fair comparison at all.
These Glass shots have a 1/15 sec. exposure at f/2.5. Glass changes the ISO as light conditions change; these shots used a variety of ISO settings, such as 363, 418, 551, 678, 727, and 960. Quite a range, and certainly a smart way to get properly exposed photos without the use of a flash.
When I did pro wedding photography, 1/60 sec. was considered a good exposure for handheld shots. However, your head is pretty stable, so 1/15 seems like a good bet for clear shots from a head-held camera. Once you are using the lowest f-stop and the longest safe exposure, the only variable you have left to work with is ISO. Glass reads the light and picks the appropriate ISO, and there you have the best possible photo.
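You can put numbers on that trade-off with the standard exposure-value formula, EV = log2(N²/t): a dimmer scene means a lower EV, and with the aperture (N) and shutter time (t) locked, raising ISO is the only way to compensate. This is my back-of-the-envelope sketch of the arithmetic, not Glass’s actual metering code:

```python
import math

def iso_for_scene(scene_ev100, aperture=2.5, shutter=1/15):
    """ISO needed to expose a scene of the given EV (referenced to ISO 100)
    when aperture and shutter speed are fixed."""
    # EV that f/2.5 at 1/15 sec. delivers at ISO 100: log2(N^2 / t)
    camera_ev = math.log2(aperture**2 / shutter)
    # Every stop of missing light requires doubling the ISO.
    return 100 * 2 ** (camera_ev - scene_ev100)

# Dimmer scenes (lower EV) push the ISO higher -- the same behavior
# behind the spread of ISO values Glass reported in these night shots.
for ev in (6, 5, 4):
    print(f"scene EV {ev}: ISO {iso_for_scene(ev):.0f}")
```

Each one-stop drop in scene brightness doubles the required ISO, which is why night scenes of varying brightness produce such a wide spread of ISO values at a fixed f/2.5 and 1/15 sec.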
So even though I think the nighttime DSLR shots are generally better, I am very impressed with the Glass shots. I had to set the ISO with my DSLR and then monitor my exposures to make sure they weren’t going too long. I was using complicated settings on a complex camera and I had the benefit of years of SLR experience. The question is not really whether a DSLR can capture better images. Of course it can! And the more effort and expense you are willing to invest, the better your DSLR results will be.
The question is: what can Glass accomplish with virtually NO effort and no expertise? Can Glass capture the moments you want to remember?
In my opinion, the answer is YES!
Here are a couple of new Glass macro images, this time showing the inside of Glass.
The screen is on, but you can’t really tell without looking at the reflection. Look at the reflected (lower) prism and you can see the time of 8:14 shown backwards, small and faint.
Also, if you look to the right of the prism you’ll see the inner camera/light sensor. This is what makes head detection possible. I believe it is also responsible for wink detection, although so far I’ve not been able to confirm that. Here’s a closer look:
I’ve talked to a lot of people about Glass, and the one almost universal Glassware idea is facial recognition. Never again feel the shame of forgetting someone’s name, etcetera, etcetera. There are countless iterations of app ideas, but they all rely on facial recognition.
Finally, here is an app that puts this dream a step closer to reality!
Check out the demo video for MedRef for Glass:
I’m excited to try this one out. For anyone else who wants to see it in action, the app is available at: https://medrefglass.appspot.com/
The Through Glass app allows you to see the photos that your fellow Glass Explorers are uploading from Glass to G+. You can see them on https://through-glass.appspot.com/ and on your Glass device.
Through Glass does what it says, and if you want to keep your finger on the pulse of the Glass community, it may be a great addition to your timeline. As for me, I think I’d prefer to check the #throughglass posts on Google+ where I have the ability to +1 and comment on them.
However, this brings an interesting topic to mind. I wonder what the Glass experience is like for Glass users who actually know other Glass users. I live in the Midwest, and so far I’ve only met one person who was able to identify Glass. I recognize other Glass Explorer names from the communities, but I don’t really know any other Glass Explorers.
But when the invitations to pick up Glass started rolling out, I saw LOTS of G+ posts from people who were driving to Google to pick it up. I’d bet the Glass users in California are much more likely to know other Glass users. To them, these Through Glass posts might have more meaning. You might say “Oh, look! Pete finally managed a trip to the beach!” I wonder…