Me and the Meta Ray-Ban Display. And a neural band.

I was outfitted with a tight strap around my right wrist. On my head went a new pair of glasses. A display activated on demand, and I could navigate its icons with little swipes and gestures of my fingers, which the neural band detected. I found myself conjuring little apps like a weird wizard.

I’ve been down this road before. But soon you can go down it too. The Meta Ray-Ban Display glasses and their Meta Neural Band are coming September 30 for $800. They’re really going on sale. And now, as smart glasses proliferate and their interfaces start to expand, as I leave Meta Connect and sit at the San Francisco airport waiting for my long flight back home - wearing the Meta Ray-Ban Gen 2 glasses I’ve started testing, on sale as of two days ago - I’m full of questions. Mournful John Williams music floating in my ears adds to the sense of uncertainty. What will I become? What will all of us become?

What I really felt during and after my personal demo of the glasses, which don’t fit my prescription yet (so I had to wear chunky custom inserts), was a sense of augmentation. I don’t mean augmented reality. I mean body augmentation. Sensory augmentation. Audio and a display and now a neural wristband…it sounds more than a little sci-fi, doesn’t it? Am I wearing this to the store next? Am I greeting my kids with these on when they come home from school? Will I hug my wife with them on? Yes, because I’ll be wearing them to review. But how long after that? Forever?

There’s a show I’ve really loved the last few weeks: Alien: Earth. You really must watch it. Sure, it’s about those dripping xenomorph aliens, and also a host of unknown new creature horrors. But the true aliens are among us.

The depiction of Earth, and its rule by a quintet of all-powerful corporations including Weyland-Yutani, hits very close to home in a world that feels like it’s tipping into oligarchy. Enhanced and synthetic people proliferate in Alien: Earth: cyborgs, androids, and even synthetic bodies with human consciousness inside. These new familiar-looking humanoids are even more unsettling than the dripping-jumping parasites in their murky glass tanks. The whole show makes me feel like I’m living in a moment surrounded, impossibly, by the concept of “alien.” And should we even consider these synthetics, these cyborgs, as different from us? Should I be feeling empathy instead of alienation? And…when might I start becoming a little more like them?

Meta’s whole new glasses family.

So. That’s a lot of dystopia to take in. Let’s pull back. Smart glasses have a lot of wonderfully assistive functions. I have friends who use them, and I’ve talked to others who have turned them into essential helpers. One friend’s dad told me how much they’ve helped with his failing eyesight. He wants more tools for them, and I don’t know when those are coming. But Meta’s making a massive bid right now to get even deeper into what it calls “contextual AI.”

Meta won’t be alone here, not by a long shot. Google has its own glasses coming, likely next year: ones from Warby Parker, Gentle Monster, and Kering Eyewear. Samsung is working on some. So are TCL, Rokid, and Snap. Apple, with the Vision Pro already in the wild and assistive AirPods already out there, is surely working on some too.

That gesture band is a type of language I’ve used in VR, at home, in contained spaces. An everyday pair of glasses like this opens up the chance for me to be doing it all the time. I already tap out small gestures on my Apple Watch. But the band just…felt like an extension. I’ve had those “what if I had Force powers” thoughts; we all have. These neural bands lead to thoughts of action at a distance. But they’re also, literally, sensory extensions of a sort. They’re designed to eventually, ideally, adapt to your level of motor ability and comfort. And would I end up changing my own motor behaviors as I adapt to them, too?

Our cognitive capabilities are already extended through phones, and I find my own memory diminished. I’m also getting older, but I lean on phones for my memory every day, every hour. Glasses already correct my failing eyesight. If new sensory extensions for glasses let me see and analyze more, how much will I lean on that, or adapt to it permanently? Where do I begin and these extensions end? And when will these new glasses work with my eyes, because they’re supposed to be glasses, and yet they don’t work with my prescription?

It’s an unending set of open questions and unknown paths.

There’s also this: Meta isn’t exactly a trustworthy company when it comes to deeply personal data and moderation. How much data do we want to hand over? How comfortable are we with Meta as the mediator of our lives, via Meta AI? (Meta doesn’t allow other AI services on these glasses, but connected apps via the phone are coming. Disney is even testing them in its parks to see how they work as assistive guides…for the non-display glasses, at least.)

What will we be doing with these glasses, and when, and with whom? New tech is like this: full of branching questions. I might seem particularly confused for someone who has looked at AR and VR devices for well over a decade, but augmented reality - true augmented reality, something you always have on you, something contextually aware of your world - is totally new, and I’ve been waiting for it for a long while.

I’ve seen snippets. Augmented experiences, mixed reality apps. I dip into headsets for hours at a time. But those are dips into an immersive pool; I come out of them again and re-enter the rest of my life. The immersive tech gets left behind.

Wearable tech, though - true wearable everyday tech - stays on you. It’s constant. Smartwatches, for instance. Smart rings. Even the phone in your pocket; let’s call that wearable in a sort of way, or at least ever-present on your person. You start to live life knowing it’s part of you. You may ignore it a lot of the time, but then your wrist buzzes, and you realize it’s there.

I’ve worn smart glasses for days - Meta Ray-Bans - and they can be extended helpers for my eyes: I can ask the AI curious questions about the world (which it may or may not answer), capture photos, or take a call. But the battery runs out and I swap them out. Now, with a display and a wristband, I’m taken back to the Google Glass days. I have hazy memories of wearing Google Glass on the train, to work, around home. It didn’t do much. It was more of a simple notification device. And it didn’t look like something you’d wear every day.

But the Display, and the Neural Band, can go further. Meta’s next step is to build toward a 3D interface like the Orion prototype it showed last year. Google’s going to introduce even more contextual AI via the phone in your pocket. I can see myself adding more pieces onto myself. How many will I want to charge? Will I keep augmenting? Am I starting down the road to being a cyborg? Are all of us?

Touching grass with glasses, is it even possible?

And now, the flip side. Or, side side.

I’ve been thinking about being more connected with the world around me. I’ve been dreaming about a landscape full of sentient things I can talk to, tuning into hidden channels of the universe. Then sometimes I meditate, pull back, try to erase the past and the future, and be in the present. The ever-present present.

A book I’m reading now, The Spell of the Sensuous by David Abram, steeps in these thoughts. Being more attuned to the hidden messages and languages of the world. The signals that we already listen to subconsciously, that we’ve built much of our cognition on.

There may be a distancing from that world that started with language, abstraction, thought - concepts of time and space that pull away from the present moment and place. I like thinking about this. I want to make theater and art that reflects on it. Maybe using glasses, the way headphones work in a headphone play. Like Viola’s Room, which I talked about in another issue.

But I also wonder about glasses as a distancing layer. I’m paradoxically reading a book, after all, about how language removes us from the world. Should I stop reading the book? Should I stop trying to overlay ideas onto the world, and just be in the world instead? Touch grass, etc.?

A play I wrote recently, which I hope to workshop in an actual park, is called Endemica. It’s about, maybe, improv performers who have removed themselves from the chaotic and collapsing world. They’ve made a new world through consensus reality. But that reality isn’t holding. Now it’s all melting over and over.

We need each other to make our world. Our agreement on ideas is what makes reality exist. Or, does reality exist without that? Well, of course it does. Trees, wind, rain, sun. It’ll go on without us, I think.

So back to glasses. Maybe we can transcend our sense of the world, which may well be locked into our brains, through new tech. Or is that just compounding the problem with more problems?

Tech is often a utopian dream at first, then a tangled mess later. Again, I wrote a play about that way back: Utopia Parkway.

These glasses and the various augmentations I’m starting to feel are the beginning of a strange new blend - an Intertwixting, if you will - that promises to layer itself between the physical world and the virtual, between our senses and ourselves. Michael Abrash and Richard Newcombe, deep thinkers at Meta Reality Labs Research, laid out their far-forward expectations during a developer keynote at Meta’s campus, deep in a building called The Museum: Social Superintelligence, contextual AI that lives everywhere and is ready for us at every point of interaction, something like a glue between a massive social network layer and a digital twin-mapped world, infused with AI that uses sensors all over us. Our eyes, our ears, our hands, maybe more neural sensors in the future.

We are the conduit for AI to live in the world. And then, that conduit becomes the path by which we interface with robots, and train them to work as extensions of us. Or replacements.

In a few weeks, Meta’s glasses will just have a display, a handful of apps, and won’t connect to your phone as deeply as you hope (or fear). And those gestures will only work with the Ray-Ban Display glasses. They might feel limited, or too fidgety, or confusing.

But what happens when the pieces of tech intertwine? When the interfaces become common, the AI becomes deeper, the apps more fluid? You could choose not to wear the glasses. But things like them will likely be out there in other forms. We’re already using them. We’re phonepeople now.

New times need new art. I’m hoping to make some. And in the meantime, I’m riding along on this strange journey through very disturbing times.

A lot to chew on. I’m tired. I also interviewed Meta’s Andrew Bosworth and Vishal Shah. See you next week, and I’ll tell you all about it.
