It appears that I’ve somehow already spent almost half a decade between the video game and real-time interactive software industries. For a good half of that time, I’ve worked on a lot of projects in the emerging XR space (virtual reality, augmented reality and mixed reality).
As one of the growing number of frequent users of XR technologies, I’ve noticed a huge problem with most XR platforms: text entry. If you’ve ever tried to type on a keyboard while wearing a VR headset, as a developer for instance, you know this problem all too well: lifting an HMD (head-mounted display, for the uninitiated) just to peck away at a keyboard in primitive ’80s computer fashion. In AR/MR headsets you can usually see and use a keyboard; however, doing so still tethers you to that keyboard at a desk.
Let’s face it. Keyboards have been around too damn long and, though we’ve become somewhat accustomed to them, they really aren’t going to cut it in a spatial computing future where we value ergonomics, accessibility and portability. When computers become invisible and seamlessly integrate into our lives, nobody wants to be hauling around a spare input device, especially one as big as today’s keyboards.
Touchscreen virtual keyboards (touch-type keyboards) have been a huge step in the right direction, with many features to aid speed and reliability: a form factor small enough for thumb typing, auto-completion, and of course (for better or worse) auto-correct, to name a few. Despite this, a spatial computing future that aims to be compatible with healthy, prosperous human lifestyles should, simply put, not have us hunched over a touch screen, or planted at a desk keyboard, for hours on end.
Today, even leading devices such as the Oculus Quest, Magic Leap One and Microsoft HoloLens 2 require some form of text input, and their default text entry interfaces still leave a ton to be desired. Usually they take the form of pointing a controller at a button or virtual key and pressing a trigger to enter a single character at a time, or tapping virtual buttons; approaches which, while robust, can be infuriatingly slow and finicky, especially if you make a mistake (the sketch below shows just how little that loop actually does). There is no way I could feasibly write this whole article with just the current text input solutions. But I can type this sentence using a current XRKey prototype, and I can do so pretty reliably and quickly after a bit of practice.
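To make the “one character per trigger pull” point concrete, here is a minimal, purely illustrative sketch of that ray-and-trigger loop in Python. Nothing here comes from a real XR SDK; the `VirtualKey` type, the flat keyboard panel and the per-frame controller samples are all assumptions made up for the example.

```python
# Hypothetical sketch of controller-pointer text entry (not from any real SDK).
# Each virtual key is a small square on a floating panel; every frame we
# intersect the controller ray with that panel and, on a trigger press,
# commit the character of whichever key the ray lands on.

from dataclasses import dataclass

@dataclass
class VirtualKey:
    char: str
    center: tuple            # (x, y) position of the key on the panel
    half_size: float = 0.02  # keys are roughly 4 cm squares

def key_hit(ray_origin, ray_dir, key, panel_z=1.0):
    """Intersect the controller ray with the keyboard panel (plane z = panel_z)
    and report whether the hit point falls inside this key's square."""
    if abs(ray_dir[2]) < 1e-6:
        return False                     # ray is parallel to the panel
    t = (panel_z - ray_origin[2]) / ray_dir[2]
    if t <= 0:
        return False                     # panel is behind the controller
    hx = ray_origin[0] + t * ray_dir[0]
    hy = ray_origin[1] + t * ray_dir[1]
    return (abs(hx - key.center[0]) <= key.half_size and
            abs(hy - key.center[1]) <= key.half_size)

def enter_text(frames, keys):
    """frames: per-frame (ray_origin, ray_dir, trigger_pressed) samples.
    Exactly one character can be committed per trigger press."""
    text = []
    was_pressed = False
    for ray_origin, ray_dir, pressed in frames:
        if pressed and not was_pressed:  # rising edge of the trigger
            for key in keys:
                if key_hit(ray_origin, ray_dir, key):
                    text.append(key.char)
                    break
        was_pressed = pressed
    return "".join(text)
```

Even in this toy form the bottleneck is plain: one aimed trigger pull per character, with none of the shortcuts a physical or touch keyboard gives you.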
I typed that last sentence, the one about the prototype, without touching any physical hardware. I’m officially announcing my work on XRKey: a novel text-entry solution for VR, AR and MR technologies.
But why? The reason is simple: people will need to communicate in written language on spatial computers. The current front-running solution to this problem is deep-learning networks for speech processing, commonly referred to as natural language processing (NLP) or natural speech processing (NSP).
Although dictation seems to be gaining significant traction as an input method, it suffers from a few major flaws:
1) Most current dictation solutions depend on NLP/NSP implementations that are largely cloud-based and require an internet connection to work well.
2) Current NLP/NSP needs a relatively noise-free environment; good luck using it at the train station, in a noisy cafe, or next to a busy street. This will no doubt improve over time, but for now it is limited by our ability to isolate speech audio from its ambient environment.
3) NLP/NSP can recognize letter names and known words; however, it may not recognize foreign-language words, certain special characters or symbols, alternative spellings, proper nouns, uncommon names, alphanumeric codes, and the like.
To solve these edge cases for the short to mid term, if not the long term, we need a text/data input solution that can work offline using only currently available hardware, and the solution we’ve come up with at Legendary Gameworks can do exactly that. Our approach also offers a benefit dictation can’t: privacy when entering sensitive or private information and communications in public.
We are currently looking for community support, as well as some help financing what I believe will be the future of text and data entry for spatial computing. If you have any interest in seeing this come to fruition, I invite you to join us at www.xrkey.info, where we will be posting our latest updates. I’m looking forward to sharing more about what we’re working on, and we’ll be sure to announce when we have something to share publicly about XRKey.
Thanks again,
Austin