Computers are advancing by leaps and bounds and are increasingly displaying attributes of practical Artificial Intelligence: they may not yet clash with astronauts, but they definitely take over many daily tasks that used to depend entirely on human brainpower.
One area where this is felt is knowledge work tools: just look at Google’s uncanny ability to figure out what search results you really need despite your ill-formed and misspelled query. And as this trend continues to accelerate, you can’t help but wonder: what capabilities would we want to surrender to this AI? What should it do for us – and what would we rather it didn’t? Because if we don’t discuss this, the software vendors will push whatever they come up with at us, whether we like it or not!
I’m not talking about better spell checking, mind you; that is happening. But we can expect far more powerful stuff.
Think of the day when your computer will answer your incoming mail for you, based on its own judgment. That capability exists in rudimentary form today, but I’d guess it’s maybe ten years away from being fully practical… and would we even want it?
I see several levels of AI assistance here:
- First, there is routine work that the Artificial Intelligence might do for us, like sort our incoming mail into categories – promotions, deals, work related, family, and so on. By now there are many tools and clients that already do that; they may vary in their accuracy depending on the algorithms they use, but those that survive are pretty useful.
- Second, AI might use predictions and recommend decisions to us, which is what Knowmail does today with its “Next Best Action” feature; but this is still in the nature of a recommendation, so we’re still in control.
- Lastly, the AI could actually take over applying its decisions – acting autonomously in our best interest.
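The first level above – routine sorting – can be sketched as a trivial rule-based classifier. This is a minimal illustration only: real mail clients use trained models, and the category names and keywords below are my own assumptions, not any product’s actual logic.

```python
# Minimal rule-based mail sorter: a stand-in for "level one" above.
# Real clients use trained models; these keywords are purely illustrative.

CATEGORY_KEYWORDS = {
    "promotions": ["sale", "discount", "offer"],
    "work": ["meeting", "deadline", "report"],
    "family": ["mom", "dinner", "birthday"],
}

def categorize(subject: str) -> str:
    """Return the first category whose keywords appear in the subject line."""
    lowered = subject.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return category
    return "other"

print(categorize("Huge SALE this weekend!"))   # -> promotions
print(categorize("Project report: deadline"))  # -> work
```

Even a toy like this shows why accuracy varies between tools: everything hinges on how well the rules (or, in practice, the learned model) match each user’s actual mail.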
Here, things become tricky – but actually, if we assume the AI is advanced enough, no trickier than having a human assistant. The problems will be (1) knowing when we can be sure we’ve reached that level… and (2) the trouble that will result if we get that call wrong.
But just imagine that the day has indeed arrived, and AI can really think as well as a human assistant. What would we want this AI to do for us? Here’s my take:
- I’d want the AI to take over the drudgery, so I can focus on being creative.
- I’d want the AI to work across all my applications, by learning to use them, not just the application it’s a part of.
- I’d want the machine to know me well enough to do all that. To do so, it would have to observe what I do and learn over time. It would have to build a “mental” model of my preferences and work style.
- I’d want to have a say in the above model – I’d definitely want the computer to share the model it had constructed of me, and I’d want to be able to explicitly override its understanding of me if it got it wrong; at the model level, not the individual decision level.
- I’d want it to pass its individual decisions by me too – I might choose to let them through, but I’d want the option of intervening as I see fit.
- I’d want the AI to respect my intelligence (however flawed it may think it is). Yes, that’s right. Even today I may find myself Googling a phrase with a grammatical error in it, and Google jumps to the conclusion that I made a mistake and eagerly corrects it for me. 99% of the time that’s fine, but I do need the option of an override – because 1% of the time I mean what I typed: I know the error exists in the page I’m looking for. Don’t assume I’m stupid!
- I’d want fine control of what the AI does or does not presume to do. In something as powerful as this, we’ll need a high degree of customizable “preferences”.
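To make the middle items on this wish list concrete, here is a sketch of what an inspectable, overridable preference model might look like: the assistant learns from observed choices, can explain back what it believes, accepts corrections at the model level, and still leaves every individual recommendation open to a veto. All class and method names here are hypothetical – this is a thought experiment in code, not any real product’s design.

```python
# Hypothetical sketch: a preference model the user can inspect and override.
# Learned observations inform recommendations, but an explicit user override
# at the model level always wins - and the output is still only a
# recommendation the user may veto.

from collections import Counter

class PreferenceModel:
    def __init__(self):
        self.observed = Counter()  # (context, action) counts learned by watching
        self.overrides = {}        # explicit user corrections, model-level

    def observe(self, context: str, action: str) -> None:
        """Learn from what the user actually did in a given context."""
        self.observed[(context, action)] += 1

    def override(self, context: str, action: str) -> None:
        """Let the user correct the model itself, not just one decision."""
        self.overrides[context] = action

    def explain(self, context: str) -> dict:
        """Share the model: what it has learned, and any user override."""
        learned = {a: n for (c, a), n in self.observed.items() if c == context}
        return {"learned": learned, "override": self.overrides.get(context)}

    def recommend(self, context: str):
        """Recommend an action; the user remains free to reject it."""
        if context in self.overrides:
            return self.overrides[context]
        candidates = [(n, a) for (c, a), n in self.observed.items() if c == context]
        return max(candidates)[1] if candidates else None

model = PreferenceModel()
model.observe("newsletter", "archive")
model.observe("newsletter", "archive")
model.observe("newsletter", "read")
print(model.recommend("newsletter"))  # learned preference: archive
model.override("newsletter", "read")  # the user corrects the model itself
print(model.recommend("newsletter"))  # the override wins: read
```

The point of the `explain` method is transparency: before I let such an assistant act, I’d want to read its model of me back – and fix it where it’s wrong.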
In short – I’d want to stay in control. The age of DWIM – letting the computer figure out how to “Do What I Mean” – is still far in the future (say, 20 years?), and until then let’s be careful.
But when that day comes, I sure hope the computer and the human will be working together to enable great new things, new work modes, and new fun. Exciting times are ahead!