As you walk down the hallway, an attractive coworker approaches. You’ve seen her around but you’re blanking on her name. Just then, “Ashley Tate” flashes on the perimeter of your visual field. “Hey Ashley,” you say with a smile. She smiles back.

Back at your desk, your gloved finger taps the name in the air in front of you and brings up Ashley’s LinkedIn profile. Turns out she’s a VP, and very much out of your league. Feeling a little silly, you swipe left, deleting the window.

Realizing it’s time to get your mind out of the gutter and back to work, you whisper a code word, then “Let me see the marketing department’s Q4 2019 budget proposal,” which, after a brief pause, magically appears in midair. Craving some tunes, you say, “Pick up where I left off with Coldplay, for my ears only.”

If that scene seems a bit far-fetched, it’s not. Within a year or two, we’re going to be living in a hands-free, video-centric computing world where artificial intelligence (AI) and augmented reality (AR) come together through a not-so-ordinary pair of glasses. And nothing will be the same … except human behavior, that is.

Google had the right idea with Glass. The blowback wasn’t because Glass was ahead of its time or a lousy implementation. Think of it more as a sacrifice to a society where radical change takes some getting used to. That said, the battery life sucked, early adopters did act like glassholes, and $1,500 was more than a little pricey.

It reminds me of the first portable music players. Walking around with a wire running up your shirt and earbuds sticking out of your ears seemed ridiculous at first. Now every smartphone has a jukebox inside, and Apple caused an uproar when it removed the iPhone’s headphone jack and declared wireless to be the new thing.

The point is, we’re so used to interfacing with computers through two-dimensional displays and keyboards, and to describing the world around us through crude textual interpretations of what we see and hear, that the very notion of accessing and sharing information directly through our senses seems strangely foreign.

Soon enough, that will change. Besides Google parent Alphabet, tech giants Apple and Facebook, along with secretive startups with major-league funding like Magic Leap, are all working on breaking the boundary between man and machine by bringing AI and AR to users through smart glasses.

Considering that we’re all just starting to get used to talking to smart virtual assistants like Alexa and Siri, I realize that sounds like a big leap. It’s probably a good thing that the first real product to hit the market was sort of a baby step from Snapchat parent Snap.

Last year people went bananas for Snap’s Spectacles: quirky-looking $130 sunglasses that record 10-second video Snaps with a 115-degree field of view and upload them to your phone. If you want to share an experience or an event on Snapchat, just click a little button on Spectacles and friends will see it pretty much the way you did.

Spectacles’ budding popularity is a harbinger of the far more sophisticated technology and capabilities to come in what will undoubtedly be the most important device category since smartphones.

Google just released a new version of Glass, this one for enterprise applications. In November, Bloomberg reported that Apple is readying its own move into digital glasses as soon as 2018. And the Cupertino company filed a patent application detailing its concept for AI smart glasses less than two weeks ago.

On an earnings call last October, Apple CEO Tim Cook said that AR poses “some really hard technology challenges,” but when it happens, “it will happen in a big way, and we will wonder when it does, how we ever lived without it. Like we wonder how we lived without our phone today.”

Better get ready for a world with no keyboards, remotes, displays or TV screens: a world of invisible computing.

Image credit: Spectacles by Snap.

Portions of this post originally appeared on foxbusiness.com.