Credit: Meta AI

While writing a story this week about what the now-highly-expected Apple Glasses could contain, a recurring thought popped into my head: how many devices are we all really going to wear at the same time?

For me, that answer is many. I don’t wear as many wearables at once as I did back in my smartwatch-testing days, mostly because if you’re testing watches, you can stack multiples on both wrists. But when it comes to eyes…well, you only have one set of those.

But I assume that when it comes to things you charge every day, most people’s patience wears thin pretty fast. We charge phones. We charge earbuds. We charge watches. Maybe we charge rings, or laptops and iPads. Glasses? Neural bands? The future I’m looking at assumes we’ll pick the stuff we keep on us, and that stuff needs to be charged. So how many pieces will we pick?

It’s something I started feeling in Intertwixt 4, when I wrote about my early days of feeling like a cyborg. The Verge’s Victoria Song touches on wearable overload in her latest Optimizer newsletter, too.

Sure, there’s a sense of total wearable overload. But I’m also seeing it from a particular vantage point: I look at lots of different wearables for my job. You, the average person, don’t. You’ll only pick a couple, maybe. Companies need to realize they’re all competing for the same places on our bodies, and there’s only so much people will tolerate. Also: these things need to work together, better, now.

Credit: Meta AI

A constellation of removed pieces

Part of the frustration, besides all the daily charging, is that a lot of these things have different charging systems. Some are Qi, some MagSafe, some USB-C, and many of the rest use proprietary little magnetic cables or clips you have to make sure not to lose. I have dongles all over the place. I forget what all those abandoned cables do.

Also, a lot of these things don’t even talk to each other. Meta Ray-Ban glasses don’t recognize my Apple Watch. Fitbits and AirPods don’t communicate beyond basic Bluetooth. The AI systems are different, and so are the voice commands. And even within the same product family, things can still get weird.

I see a need for triangulation, recognition and handoff: groups of wearables that form new combinations when they recognize each other. AirPods and Watch? Fine. Glasses and Pixel Buds? OK. Maybe you want the watch and the glasses, though: see the apps on the watch screen, control the glasses from your wrist. That makes a lot of sense, even without gestures.

Our own senses work together magically, and we don’t think about it much. Big tech companies are trying their best to either augment or replicate those senses in various ways, and these extra bits start to feel extremely awkward if they don’t work together as seamlessly as our own senses do.

That’s not easy. Contextual AI dreams of having this all work automagically, but in my everyday wearing of watches, glasses, AR things, earbuds and the rest, there are plenty of disconnects. Lag moments. Bluetooth static at intersections. Something doesn’t auto-swap devices. The glasses haven’t connected to Bluetooth for some reason, or haven’t synced with the app. Turn it off and on again, perhaps?

We need to manage this better on our phones, and for ourselves

I think it’s inevitable that a larger spectrum of wearable devices will be on us, but worn optionally. They’ll all communicate with our phones for the near and probably far future. We will have a sensory net. But just as we need better management of privacy, connectivity and identity in our phone apps (see my local friend Jonathan Bellack’s ongoing discussions of identity in his newsletter Platformocracy), we need the same for our peripherals, via our phones.

Our phones are our core devices, and we know them, sort of. But the way peripherals connect to phones is a mess: dongles, old protocols, and accounts meant for one social network glommed onto another. I want a smarter hub for peripherals, a way to see how they communicate, and even to direct how they communicate with each other. Like a smart home for my body: I want to know what the pieces are, and I want to finesse them more easily.

An incoming call on my phone that I can screen on my watch, with only a few select things going to my glasses. Glasses that perceive things I selectively activate, and only share them with certain sources in certain ways. The watch as a glasses dashboard, the phone in my pocket. I see a system of tiers. But again, maybe you don’t want to wear any of these. Let the system adapt to what you need. Don’t make us learn to behave like enhanced people wearing new gadgets.
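If I were sketching what that tier system could look like under the hood, it might be something like the toy Python below. To be clear, this is purely a hypothetical illustration, not any real API: the Peripheral, Event and route names are all made up, and the "tiers" are just the idea that the phone stays the hub and decides how much of each event reaches each device you happen to be wearing.

```python
# Purely hypothetical sketch: no wearable platform exposes an API like this today.
# It illustrates the "system of tiers" idea, with the phone as the hub deciding
# how much of each event reaches each device you chose to put on this morning.

from dataclasses import dataclass


@dataclass
class Peripheral:
    name: str         # e.g. "watch", "glasses", "earbuds"
    tier: int         # 1 = full detail, higher tiers get a terser version
    worn_today: bool  # adapt to whatever the person actually wore


@dataclass
class Event:
    kind: str      # "call", "message", "navigation", ...
    detail: str    # full text; a short summary comes before the first ";"
    max_tier: int  # only peripherals at or below this tier see the event


def route(event: Event, peripherals: list[Peripheral]) -> dict[str, str]:
    """Return a per-device payload, skipping devices that aren't worn or are out of tier."""
    payloads = {}
    for p in peripherals:
        if not p.worn_today or p.tier > event.max_tier:
            continue
        # Tier 1 (the watch) gets everything; higher tiers get the glanceable summary.
        payloads[p.name] = event.detail if p.tier == 1 else event.detail.split(";")[0]
    return payloads


if __name__ == "__main__":
    devices = [
        Peripheral("watch", tier=1, worn_today=True),
        Peripheral("glasses", tier=2, worn_today=True),
        Peripheral("earbuds", tier=2, worn_today=False),  # left in the drawer today
    ]
    call = Event("call", "Incoming call: Victoria; swipe to answer or screen", max_tier=2)
    print(route(call, devices))
    # {'watch': 'Incoming call: Victoria; swipe to answer or screen',
    #  'glasses': 'Incoming call: Victoria'}
```

The point isn’t the code; it’s that the routing rules should live in one place I can see and edit, instead of being scattered across half a dozen companion apps.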

And there’s also triangulation with how we live and connect with everyone else. Right now, my wife doesn’t know when I’m talking to her, or to someone on my glasses — and is that someone real, or AI? There’s a common language to living with phones and earbuds and watches that’s being awkwardly worked out, and glasses complicate the social norms even more. A growing range of non-voice gestural behaviors, while being more discreet, could also make your actions even more invisible from the outside, and not always in a good way.

This week I thought about how Apple could triangulate its devices to work with future glasses, which might solve some of these problems. Or, create more of them. And I wore display-enabled Rokid Glasses, which have prescription inserts that work with my eyes. Why doesn’t Meta do this with Ray-Ban Displays?

That’s all for this week. Lots more glasses and XR thoughts to come, because this is the season for it. Thanks again for reading, and please follow my other social adventures below.
