I think we'll witness, in this lifetime, Licklider's man-computer symbiosis. And I'm not alone. Today I write about what I think will be the next major milestone in this development: the advent of the next-gen operating system, a proactive artificial intelligence.

A major driver of technology is to make living easier. But technology is itself a source of stress: it makes life harder because we still have to tell it what to do and how to behave, and its interface is constantly changing. As the list of what we can do with technology grows every day, this operational stress grows with it. Eliminating this stress factor will be our next challenge. We will need to solve it to keep up with our own technology.

To overcome this challenge, our machines will need to become proactive in knowing what we want. I am anticipating an inflection point where, instead of responding reactively to explicit requests (e.g., "Hey Siri, can you do X for me?"), a proactive AI will emerge that anticipates needs. This new OS will ask instead, "Hey, it looks like you want Y right now. Is this correct?"
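To make the contrast concrete, here is a minimal sketch in Python. All of the names (`Suggestion`, `reactive_assistant`, `proactive_assistant`) are hypothetical; the point is only the inversion of who initiates the exchange.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # e.g., "book_flight"
    confidence: float  # the model's belief that the user wants this now

# Reactive: the user initiates, the machine obeys.
def reactive_assistant(user_request: str) -> str:
    return f"Running: {user_request}"

# Proactive: the machine initiates, the user merely confirms.
def proactive_assistant(suggestion: Suggestion, ask_user) -> str:
    prompt = f"It looks like you want to {suggestion.action}. Is this correct?"
    if ask_user(prompt):
        return f"Running: {suggestion.action}"
    return "Standing by."
```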

Going back to the problem: using a machine is a known source of stress. We must learn how to use the machine in question, its interface must balance information overload against information underload, and, holistically, it needs us to trust its behavior. Automation helps because it hides decision making, limiting our exposure to that stress. There are ethical concerns here, but they are outside the scope of this post.

At the interface layer, we currently balance information and trust by tailoring the UI/UX to the general public. So if you're in the middle of the distribution and share the common opinion, you're in luck. But if you fall in the edge cases, if you're not part of the statistically significant "target market" or not using the machine in the most common way, most systems won't adapt to your needs. You either get a limited system where it's easy to do one thing and hard to do anything else, or a system that can do many things and none of them well (without extensive training). These are some of the reasons why we need future OSs to be AI-driven, where the AI knows what we want.

One of my favorite examples of proactive AI outside the cellphone ecosystem (although not entirely outside it) is Google's generated email response suggestions, aka Smart Reply, which accounts for over 10% of all responses sent via Google Inbox.

Now let's focus on how such an OS might impact the cellphone. Most people have a phone, and as an added bonus, automation on this device is still nascent (I can't think of much my cellphone does for me autonomously besides buzzing when I get a new message). While today's menu navigation is patently complex, tomorrow should see this architecture become obsolete. I expect next-generation operating systems to anticipate intent, and once they know what we want, the question becomes whether or not to take action.

So here's my theory. Once our phones know what we want without asking, some concrete things will happen. Power buttons will go away (they are already on their way out). Volume buttons too. And our phones will start asking permission to do things automatically that they never did before. Should your phone buy an airplane ticket for you after a call in which you confirmed trip details with a friend? If your phone knows when you're hungry, should it prepare a food delivery order? Please confirm.
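A hedged sketch of what that permission gate might look like, assuming a hypothetical `maybe_propose` routine and an arbitrary confidence cutoff; nothing here is a real phone API:

```python
CONFIRM_THRESHOLD = 0.7  # assumed cutoff: below this, the phone stays silent

def execute(action: str) -> None:
    print(f"Executing: {action}")  # stand-in for the real side effect

def maybe_propose(action: str, confidence: float, confirm) -> bool:
    """Propose an inferred action, but only act on explicit consent."""
    if confidence < CONFIRM_THRESHOLD:
        return False  # not confident enough to interrupt the user
    if confirm(f"Should I {action}?"):  # the "Please confirm." step
        execute(action)
        return True
    return False

# Example: an action inferred from a call that mentioned trip details.
maybe_propose("buy the Friday ticket to Boston", 0.85,
              confirm=lambda q: input(q + " [y/n] ") == "y")
```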

And that would only be the first phase. Because once our phones have sufficient data, the questions lose relevance: these phones begin modeling our behavior, mapping what we want to new sensor inputs in ways that aren't tailored to the general person, but to the general you.
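One way to picture that shift, as a toy sketch: a per-user model that counts how often you confirmed a suggestion in a given context, and stops asking once its confidence passes a threshold. The class name, the counting scheme, and the 0.95 cutoff are all assumptions for illustration.

```python
from collections import defaultdict

AUTONOMY_THRESHOLD = 0.95  # assumed: past this, the question loses relevance

class PersonalModel:
    """Toy model of 'the general you': per-context confirmation counts."""
    def __init__(self):
        self.seen = defaultdict(lambda: [0, 0])  # context -> [confirmed, total]

    def update(self, context: str, confirmed: bool) -> None:
        stats = self.seen[context]
        stats[1] += 1
        if confirmed:
            stats[0] += 1

    def confidence(self, context: str) -> float:
        confirmed, total = self.seen[context]
        return confirmed / total if total else 0.0

    def should_ask(self, context: str) -> bool:
        return self.confidence(context) < AUTONOMY_THRESHOLD
```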

Anticipating needs means all your data can live in the cloud, landing on the device about five minutes before you need it and leaving the second you don't. We wouldn't need to search for apps if they surfaced before we needed them. And security will change too: what if phones could detect fear and call the police when you are in danger, or an ambulance when you feel pain? In the movies, the bad guy sometimes tells the person to answer the phone and say everything is OK. Should your phone tell the person on the other end that you're lying? I don't know.
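The data-placement idea reduces to a simple prefetch window. A minimal sketch, assuming a five-minute lead time and a hypothetical `sync_window` decision function:

```python
PREFETCH_LEAD = 5 * 60  # assumed lead time, in seconds

def sync_window(now: float, predicted_use: float, in_use: bool) -> str:
    """Decide where a piece of data should live right now."""
    if in_use:
        return "device"  # needed this very second
    if 0 <= predicted_use - now <= PREFETCH_LEAD:
        return "device"  # fetch ahead of the predicted need
    return "cloud"       # everything else stays off the device
```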

And by the way, this type of technology, knowing what you're thinking, how you feel, what you want, is exactly what drives me, the rest of the team at Neurable, and other companies besides. It is my hope that with the right conversations around privacy, education, and user rights, this type of technology will be a force for good. In my opinion, an obvious first step is education (and access to it): users must understand the decision-making processes behind the AIs that serve them. LinkedIn, how do you choose the people you think I know? Without answers to this type of question, your AI will continue to creep us out.

Anyone with similar curiosities will want to watch how Amazon's Alexa changes over time, because it's already on this path, with new work including interest in deriving sentiments and behaviors from ambient speech in order to sell medications. Unfortunately, we cannot predict how Amazon's moral direction will change as new features like this come out.

In this piece, I did not consider the ethical ramifications of this type of technology. If you are curious to understand some existing and very real concerns, I urge you to read Joichi Ito's "Resisting Reduction."