If you follow the world of consumer tech at all, you know that the race for augmented reality (AR) glasses is on. Apple, Google, and Facebook are pressing forward in the development of these devices; Samsung and Snap seem to be just behind; and there will be independent companies and other mobile competitors getting in on the action as well.
The race in this category is naturally about commercial appeal and the potential for popular applications. There is a great deal of talk already about what games could be played via AR glasses, or how such devices could improve day-to-day practices like navigation or messaging. There is also plenty of talk about the use of AR glasses in specialized capacities, such as by a construction worker assessing a worksite or a doctor performing surgery. As we move ever closer to the true arrival of AR glasses, though, it is also worth asking what the devices might do for disabled people.
Of course, answers to this question depend largely on the specific capabilities of the technology, as well as on what level or type of disability a given user has. Generally speaking, though, there are some genuinely exciting potential benefits.
A Range of Smartphone Actions
Perhaps the most basic (yet still very impactful) benefit of intelligent AR glasses for disabled users will be the ease with which they put a range of smartphone actions within reach. Google Glass more or less pioneered the smart glasses concept years ago, and even then this was among its benefits.
In 2013, a USA Today profile on a disabled user discussed how, through voice and touch, Google Glass allowed someone paralysed from the shoulders down to take pictures, shoot videos, send and receive texts, make and take calls, and even access the internet. By this point, many people with even severe disabilities have ways of doing some or all of these things. But Google Glass showed the potential of intelligent and/or AR-equipped glasses to package them all together, essentially putting most common smartphone actions on the table.
This benefit speaks to one of the real technological marvels of these glasses: designers are now able to build high-powered internet connectivity into a very small physical space. Along with a few other challenges, this was one of the main issues holding back serious advancement in AR glasses until recent years. Now, however, the technology is ready and waiting.
What has essentially happened in this regard is that printed circuit boards (PCBs) have grown smaller and more capable at the same time. An Altium piece on modern PCB layout challenges explains that today’s electronics call for significantly more leads, components, and features on circuit boards, without any addition of board area. That demand has driven new ways of creating denser, more compact PCBs without loss of capability, which in turn has made smaller devices more powerful.
In the context of AR glasses, this means we can now reasonably expect these devices to carry high-powered PCBs capable of high-speed internet and 5G connectivity without being so bulky as to look like virtual reality headsets. Relatively ordinary-looking glasses will connect as easily as phones and tablets, which will in turn enhance the smartphone actions disabled users gain access to. Full, seamless web use by voice and touch will be right before our eyes.
Automated Comfort
Comfort is not something that’s discussed very often with regard to smart glasses, in part because some early concepts have been on the bulkier side (before the electronics were further refined and more compact designs became feasible). Indeed, in some of the concept designs we see, there almost seems to be an unspoken acceptance that they may not be the most comfortable or fashionable pieces in the early going.
Whether or not that proves true, the comfort in question here is of a different kind, specific to disabled users: an automated responsiveness to conditions and preferences that will make day-to-day life just a little more pleasant. A TechRadar report on Apple’s mixed reality brought this to mind. Evidently, the tech giant is using a loosely described technology called “control circuitry” to infer, from blink rate and pupil dilation, how a user is reacting to the environment. The glasses will then use this information to adjust brightness as needed. This will sound small to a lot of people, but to a disabled user who may not have the option of putting on or removing sunglasses or a hat at will, it will be a welcome benefit.
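To make the idea concrete, here is a minimal sketch of what such a feedback loop could look like in code. This is purely illustrative: Apple has not disclosed how its “control circuitry” works, and the function name, signal inputs, and thresholds below are all assumptions.

```python
def adjust_brightness(current: float, blink_rate_hz: float, pupil_mm: float) -> float:
    """Return a new display brightness in [0.0, 1.0] from simple eye signals.

    Hypothetical sketch only; thresholds are illustrative, not from any
    published specification.
    """
    new = current
    # Dilated pupils suggest a dim scene, so raise brightness gently.
    if pupil_mm > 5.0:
        new = min(1.0, current + 0.1)
    # Constricted pupils suggest glare, so dim the display.
    elif pupil_mm < 2.5:
        new = max(0.0, current - 0.1)
    # Rapid blinking can signal discomfort; dim a little further regardless.
    if blink_rate_hz > 0.6:
        new = max(0.0, new - 0.1)
    return round(new, 2)

# Example: a user squinting in glare (small pupils, frequent blinks).
print(adjust_brightness(0.8, blink_rate_hz=0.8, pupil_mm=2.0))  # → 0.6
```

The point is not the specific numbers but the pattern: the device reads involuntary bodily signals and adjusts the environment on the user’s behalf, with no hands or voice required.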
A Step Toward Neural Interfaces
Finally, it is difficult to discuss all of this without feeling that the glasses represent some version of progress toward neural interface devices (NIDs). As discussed in the post ‘Could Neural Interface Devices Help Disabled People?’, these are borderline-science-fiction devices that are beginning to seem more real.
While the AR glasses concepts we’ve seen don’t quite approach the ambition of Elon Musk’s Neuralink (covered in the aforementioned post), they do begin to imitate some aspects of NIDs, in that they respond to non-verbal, non-touch signals. The Apple feature just discussed, for instance, reads the body and adjusts the environment for the user in response.
That’s not quite a leap into full virtual escapism, of course. But it does bring us one step closer to a world that disabled people can navigate with ease and comfort, which is ultimately the idea that makes any progress in AR glasses so exciting.