
Can I recognize you with these glasses? Will I, at some point soon?
I see you from across the room. I don’t know who you are. Do I know who you are? Have we met before? My glasses seem to be saying we have.
Does this creep you out? Does this fulfill a fantasy? What world is this where we have this ability? Is that world already here? This week, a New York Times report revealed that Meta's facial recognition ambitions for its glasses may be right around the corner.
According to the Times report, internal Meta documents from 2025 point to a planned launch of a glasses feature, called Name Tag, that could recognize the faces of people you might already know. The feature was planned to debut at a conference for the blind, as an assistive tool. But the report also says that Meta sees this politically chaotic moment as a good time to launch such a feature, hoping to avoid criticism from groups, namely in the US, that might be overwhelmed by other distractions in the world.
I wrote about my thoughts on all this at CNET this past week, so read up on my take there. I admit, facial recognition is sort of a fantasy of mine in glasses. After all, I want to be able to recall things around me better, to have something whisper suggestions to me. Or I think I do. Do I really? At what cost?
Augmented reality’s key fantasy currency is affording superpowers, sensory or cognitive or otherwise. Smart glasses right now dangle vague promises of that future without having all the pieces to do it yet. Facial recognition, though, that’s both a superpower and a superweapon.
Enough stories about ICE’s use of facial recognition tech already exist to turn me off to the idea of big tech getting its hands on these capabilities, but here’s the thing: facial recognition is already possible, and there’s no putting that cat back in the bag. It’s not a question of whether smart glasses can do this. They already can, in principle: the AI that enables it exists, and the glasses just need access to it. Tech companies have been hesitant to activate the feature because the social framework to make it acceptable isn’t here yet. And for a technology as new and delicate as smart glasses, no company wants its own Google Glasshole moment, even though we’re already creeping toward one.
We’re already seeing cases of public anger against wearing camera-enabled glasses, and I’ve run into some situations where I’ve been asked not to wear them. Many people don’t even know I have them on, which is the strange in-between zone we’re in now with smart glasses. They’re not fully mainstream, but they’re also mainstream enough that the guy who ran across the field during the Super Bowl was wearing them and posted his POV video.
The answer is to figure out how these sensory powers get moderated and regulated, and how the social fabric around them is built. Like I said in my CNET story, how do we build privacy norms around when they do and don’t function? Do others get a chance to opt in before I’m able to recognize their faces in the future? Are there timed windows, or particular places, where facial recognition works? Or is it like Facebook itself, a complete labyrinth where our privacy settings are buried in endlessly confusing subsettings, seemingly deliberately hard to understand?
I think of the old Victorian days when someone would present a physical calling card to announce their presence. Before phones, which rang uninvited in houses. Before text messages, and text spam, and group chats, and Zoom calls with camera and audio subsettings to mask or filter. Social media apps with disappearing messages. Encrypted chat apps. Pokémon Go.
How strange, really, is facial recognition on glasses? I think it’s a sign of a new wave of advanced wearable tech, sensory augmentation apps, cognitive enhancements. We’re already wrestling with this when students are banned from wearing smart glasses during an exam, just like a phone would be banned because it can access AI (or anything else). And yet, things like AirPods are becoming ever more advanced at blocking and enhancing audio, or translating, or becoming a hearing aid. The same will happen for our eyes. And our sense of touch, or our muscles. Exoskeletons, enhanced shoes, neural bands.
We’re in the scariest possible time to think about something like facial recognition, because the society we live in, our government, and the tech companies around us are all operating in irresponsible free fall. I don’t trust anything around me. How would we trust this?
And yet we trust tech, to use it. To put our files on it. To live our lives on it. To connect. To shop. To search.
I’m taking a deep breath and accepting that things like facial recognition can be incredibly useful and assistive. And dangerous. So are superpowers. And this is the world of tech I cover. AR and VR have largely been toys for a long time, but when they become tools, things get serious. Maybe that’s where we are now. With great power comes great responsibility, and so on. If Meta’s first out of the gate with this tech, maybe this year, the rest of the tech world had better damn well lay down good ground rules around it. You might say you’ll just smack the glasses off my face, but if everyone’s wearing them, that’ll be hard to do.
Maybe we’ll all be able to ask “Who are you,” and then just know. Or maybe it’s better not to know. Who are you? I can always just ask. Or hand out my calling card. However that shall be done in the times to come. I’m having flashbacks to Neal Stephenson’s The Diamond Age…I better read that one again.