At some point, there’s going to be an AR device that looks exactly like glasses. I hope it doesn’t have cameras. If it does have a camera, it also needs an indicator that shows when the camera is on. Walking around NYC, say, among crowds wearing the glasses equivalent of 2000s-era white iPod earbuds is probably creepy, and there’s no way around that if they all carry cameras and recording lights.
But AR needs reality to augment, and a lot of reality is perceived visually, so without a camera these glasses will be a lot less useful. If we take Apple’s AR features as a guide to how it feels about cameras, Apple is clearly pro-camera: at WWDC 2021, it demoed a feature that orients you in a city by pointing your camera at the surrounding buildings. That would be very useful in a heads-up display.
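For a sense of what that feature looks like in code today, here’s a minimal sketch using ARKit’s geo-tracking configuration, the shipping iOS API behind those location-anchor demos (the glasses usage is my assumption; the API calls are real):

```swift
import ARKit

// Minimal sketch: start ARKit geo-tracking, which localizes the device
// by matching camera imagery against Apple's mapping data.
func startGeoTracking(in session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else { return }
    // Geo-tracking only works in supported regions, hence the check.
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }
        session.run(ARGeoTrackingConfiguration())
    }
}
```

Note that this localization fundamentally depends on camera imagery, which is exactly the tension here.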
So, we will probably get the front end of a camera to take in visual information. But AR could still be very useful without the hardware ever producing a human-viewable image.
One obvious (and currently available) representation is a depth map. Apple is testing out LiDAR on iPads and uses a depth sensor on iPhones for Face ID. The resulting representation is a mesh, not an image, so it might be socially acceptable, and it’s already useful for a lot of AR.
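ARKit already exposes this shape of data: with scene reconstruction turned on, an app receives ARMeshAnchor geometry (vertices and faces) rather than pixels. A minimal sketch of that flow:

```swift
import ARKit

// Sketch: receive LiDAR-derived scene geometry as mesh anchors,
// without the app ever touching the camera's pixel buffer.
final class MeshReceiver: NSObject, ARSessionDelegate {
    func start(_ session: ARSession) {
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }
        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let mesh as ARMeshAnchor in anchors {
            // Geometry only: vertices, normals, and faces. No image.
            print("mesh anchor with \(mesh.geometry.vertices.count) vertices")
        }
    }
}
```

On glasses, you could imagine the camera feed stopping at the system level and only this kind of geometry crossing over to apps.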
Another thing they could do is pre-process the video feed through the first few layers of a neural network and expose only those activations. The early layers of an image model usually do some down-sampling and low-level feature extraction. I’m not an expert, but if these representations cannot be reversed back into the original image, they might be acceptable. With current feature detectors, you can recover some bits of the image from the features (see this), so there’s work to do. But even if the math works out, it’s a big public education project to make it socially acceptable.
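To make the down-sampling point concrete, here’s a toy Swift sketch (not any real Apple pipeline) of a stride-2 average pool followed by a ReLU. The mapping is many-to-one, which is why exact inversion is impossible, even though partial reconstruction can still leak information:

```swift
// Toy example: a stride-2 average pool, then a ReLU with a bias.
// Both steps destroy information; distinct inputs collapse together.
func earlyLayer(_ pixels: [Float]) -> [Float] {
    stride(from: 0, to: pixels.count - 1, by: 2).map { i in
        let pooled = (pixels[i] + pixels[i + 1]) / 2  // down-sample
        return max(0, pooled - 0.5)                   // ReLU after a bias
    }
}

// Two different inputs produce identical features:
print(earlyLayer([0.1, 0.3, 0.9, 0.7]))  // [0.0, 0.3]
print(earlyLayer([0.3, 0.1, 0.7, 0.9]))  // [0.0, 0.3]
```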
But Apple has shown some willingness to talk about its privacy-protecting, “provably cryptographically safe” technologies and to open them up to third parties for verification, so maybe it would be willing to go this route to get a “camera” into its AR glasses.