Google Project Aura hands-on: Android XR’s biggest strength is apps


Teased at Google I/O, Project Aura is a collaboration between Xreal and Google. It is the second Android XR device (the first being Samsung’s Galaxy XR headset) and is expected to launch in 2026. Putting it on, I understood why the term “smart glasses” doesn’t quite fit.

Is it a headset? Smart glasses? Both? These were the questions running through my mind when I got my hands on Project Aura at a recent demo. It looks like a pair of thick sunglasses, except with a cord hanging from the left side that leads to a battery pack that doubles as a trackpad. When I asked, Google representatives told me they consider it a headset that looks like glasses. They even have a term for it: wired XR glasses.

I can connect wirelessly to a laptop and create a giant virtual desktop in my space, with up to a 70-degree field of view. The first thing I do is launch Lightroom on the virtual desktop while keeping YouTube open in another window. I play a 3D tabletop game, pinching and dragging the board to zoom in and out. I spot a painting on the wall and ask Gemini about it; it tells me the name of the artwork and the artist.

I’ve done all this before on the Vision Pro and the Galaxy XR. The difference is that this time, my head isn’t stuffed into a heavy headset. If I wear Project Aura in public, most people won’t notice. But this isn’t augmented reality where digital information is overlaid onto the real world. It’s much like using the Galaxy XR, with apps floating in front of you and around you.

A Google representative told me that everything I tried on Project Aura was originally developed for the Galaxy XR. No apps, features, or experiences had to be reworked for Project Aura’s form factor. That’s a big deal.

XR has a big app problem. Take the Meta Ray-Ban Display and the Vision Pro: both launched with few third-party apps, giving consumers little reason to wear them. Developers, meanwhile, have to choose which of these gadgets to invest in building apps for. That leaves little room for small companies with big ideas to compete or experiment.

This is what makes Android XR attractive. Smaller players like Xreal can access apps developed for Samsung’s headsets. Android apps will also work on AI glasses from Warby Parker and Gentle Monster launching next year.

“I think this is probably the best thing for all developers. You won’t see any fragmentation anymore. And I believe more and more devices will converge together. That’s the whole point of Android XR,” says Chi Xu, CEO of Xreal.

A pair of Google’s prototype AI glasses from Google I/O. The version I tried last week looked similar.
Photo by Vjeran Pavic/The Verge

Slipping on Google’s latest prototype AI glasses, I get to watch an Uber demo in which an imaginary version of me is making my way through JFK Airport. A representative calls an Uber on a phone, and an Uber widget pops up on the glasses’ display. It shows the estimated pickup time, plus the driver’s license plate and car model. If I look down, a map of the airport appears with real-time directions to the pickup zone.

It’s all powered by Uber’s existing Android app, meaning Uber didn’t have to code an Android XR app from scratch. Theoretically, users can simply pair the glasses and start using the apps they already have.

When I ask Gemini to play some music, a YouTube Music widget pops up, showing the title of a funky jazz mix and media controls. This, too, is the same YouTube Music app that runs on Android phones.

I’m prompted to ask Gemini to take a photo with the glasses. A preview appears in the display and on a paired Pixel Watch. The idea is that smartwatch integration gives users more options. Say someone wants audio-only glasses with a camera: they can now take a photo and see how it looks on their wrist. This will work on any compatible Wear OS watch.

Photos taken with Google’s prototype AI glasses, with K-pop-inspired effects: on the left, a pantry lit in pink and blue neon; on the right, a person surrounded by neon effects, Korean lettering, and concert lighting.

Nano Banana Pro added a K-pop effect to a photo I took on Google’s prototype AI glasses. Not bad, though the “breakfast of the future” text on the left is in Japanese.
Photo: Google

I also try Live Translate, where the glasses detect the language being spoken. I take Google Meet video calls. I get Nano Banana Pro to add a K-pop effect to a second photo I take. I try a second prototype with displays in both lenses, which enables a larger field of view. (Those aren’t coming next year.) I watch a 3D YouTube video.

It’s all impressive. I’ve heard plenty of talk about how Gemini is the actual killer app here. But when I’m told that next year’s Android XR glasses will support iOS, my ears perk up.

“The goal is to give as many people as possible the ability to have multimodal Gemini in your glasses. If you’re an iPhone user and you have the Gemini app on your phone, great news. You’ll get the full Gemini experience there,” says Juston Payne, Google’s director of product management for XR.

Payne says this will broadly be true of Google’s iOS apps, such as Google Maps and YouTube Music. Most third-party apps will be the exception, owing to iOS’s limitations, but even there, Payne says the Android XR team is looking for workarounds. At a time when wearable ecosystem lock-in is at an all-time high, that’s a breath of fresh air.

Google’s use of its existing Android ecosystem is a smart move that could give Android XR an edge over Meta, which currently leads in hardware but only just opened its APIs to developers. It also puts pressure on Apple, which has lagged on both the AI and glasses fronts. Making things interoperable between device form factors? Frankly, that’s the only way an in-between device like Project Aura is going to succeed.

“I know we can make these glasses smaller and smaller in the future, but we don’t have this ecosystem,” says Xu. “There are only two companies in the world right now that can really have an ecosystem: Apple and Google. Apple, they’re not going to work with others. Google is the only option for us.”

Google is trying to avoid past mistakes. It’s intentionally partnering with other companies on hardware, rather than going it alone as it did with the original Google Glass and its distinctive design. Apps are ready at launch. Prototypes explore multiple form factors: audio-only, and displays in one or both lenses.

Payne doesn’t shy away when I ask the big cultural question: How do you discourage glassholes?

“There’s a very bright, pulsating light if anything is being recorded. So if the sensor is turned on with the intention of saving anything, it will let everyone in the vicinity know,” says Payne. That includes any camera-related Gemini queries. There will be clear red and green markings on the on/off switch, so users can prove they’re not lying when they say the glasses aren’t recording. Payne says Android and Gemini’s existing permissions framework, privacy policies, encryption, data retention, and security guarantees will also apply.

“There’s going to be a whole process to get some sensor access so that we can avoid some things that can happen if someone decides to use the camera incorrectly,” Payne says, noting Google’s conservative approach to providing camera access to third parties.

On paper, Google is making smart moves that address many of the challenges inherent in this space. But it’s easy to say the right things before the glasses actually launch, and a lot can change between now and then.



