Happy spooky Halloween…it’s a night of strange spirits, beasts of the unknown. Let’s talk about the ghosts of telepresence…and the golems to come.

Image: ChatGPT
Whenever I hear a term that keeps popping up over and over again in meetings with different companies, I perk up and dial in. That term lately is one I’ve heard for years: Gaussian splatting.
Apple invoked it when I talked to members of the Vision team recently about Personas on Vision Pro. And Meta is using it for Hyperscape Capture on Quest. Personas and Hyperscape are two of the most impressive tech experiences I’ve had in mixed reality headsets in a long time…and they both point to a trend that’s underway. We’re portaling our lives into the metaverse. But as I’ll get into in just a bit, things from the metaverse are going to start portaling out into the real world, too. William Gibson, eat your heart out.

Me with Apple’s Jeff Norris and Steve Sinclair, discussing Personas as Personas (which are made using Gaussian splats). (Credit: Apple)
Inside Out
Apple is doing some wild work with Personas in Vision Pro, and it’s now good enough that it can feel like your face really is somewhere else…even though it’s covered by a headset. That type of face-scanning tech via Gaussian splats results in something that looks weird on a 2D FaceTime call, simply because it gets so close to realism that it tips into the uncanny valley. But in headset, it’s the best analogue for in-person telepresence that exists.
Meta has promised codec avatars for years, and I’ve seen demos of how good they can be. Samsung is promising realistic avatars for Galaxy XR soon, too. For now, though, Apple is alone in presenting this technology in an actual product.
My expressions and my eyes, inside the headset, are being cast out there. And as Apple told me in a conversation in-headset, the goal for Personas right now is to provide an authentic expression of ourselves that we can share with others. True telepresence, invoked in-headset like magic ghosts: friends who can pop into my world and chat for a bit. The spirit realm, made real.

This scan of Gordon Ramsay’s kitchen is made with Gaussian splats, seen in VR with Hyperscape Capture. (Credit: Meta)
Outside In
Then there’s the rest of the world. That same splatting technology is being used by Meta to scan entire rooms into VR. Hyperscape wants to make realistic spaces you can visit in headset, like a holodeck. The next step will be putting avatars in these spaces. It’s the reverse of Apple’s approach: instead of casting people out, it pulls places in.
I tried scanning my own home office with Hyperscape Capture, and it feels like stepping into a ghost memory of my own life. One I could share with others. Maybe someday with 3D splatted avatars as well.
These two will meet in the middle: real spaces, real people. How real? How good will our telepresence be?
And there’s more: AI. The Samsung Galaxy XR’s ability to have Gemini Live see everything I see, in virtual and actual reality, is a step toward another type of telepresence. As much as it assists us, it also assists AI, helping the tech understand our own perceptions and interactions.
I talked with Samsung and Google about where this contextual AI will go after Galaxy XR: into glasses and other halos of connected devices. What interests me about this type of camera- and sensor-assisted headset AI is that, while it helps us by making AI aware of our world, it also gives AI an ever-deeper understanding of what our world, and our perceptions, are like. We are training Frankenstein’s monster to become aware of how to act in the world.
Which brings us to…the robots.

Credit: 1X
Golem Future
The Peripheral (book one of a three-book series that’s still in progress) was one of my favorite books of the last ten years, though I’ve always loved William Gibson. (The Amazon Prime show, sadly canceled, is pretty great too.) In case you’ve never read or seen it, much of the story involves virtual time travel via tech rigs that can project people into very richly appointed super-robots. Telepresence rigs of a future time, or robot versions of James Cameron’s Na’vi avatars.
The 1X Neo home robot, available to order now, is a present-day take on this future trope. Neo promises to someday be a full-featured, autonomous home robot, but for now this uncanny proto-Cyberman is remote-controlled by someone in a Quest headset. Training data is needed to teach the robots how to act, much like the way Nvidia trains robots inside virtual simulations in its Omniverse software…but this time it’s in our physical world, modeling us.
I haven’t demoed Neo, and it’s clearly not ready yet to act on its own. But I’ve wondered how a world of people living in AR glasses will essentially become the training data for robots to come. Much like the golems of Jewish folklore, maybe we’ll invoke artificial beings to do our bidding. Or be hiding somewhere, puppeting them.
It’s also funny to me that “golem” means dumb, helpless, or “in a stupor.” That sounds exactly like the way I feel in a world full of accelerating AI.
Let those ideas percolate. The future is still forming. I bought Junji Ito’s new book, Moan, and I’m reading Black Hole by Charles Burns. Let the ghosts of future times stroll through your brain, and think about what everything that could come might mean.
Happy Halloween, and see you next time.
