The phrase "artificial intelligence is like lily pads in a pond" sounds like a really bad Zen koan, but it captures an important concept for setting expectations with modern AI solutions. If you expect your business AI solution to be awesome on Day 1, you're setting yourself up for disappointment and, possibly, failure. The lily pads explain why.
We've argued before that generalized artificial intelligence is generally unprofitable (and maybe impossible), but there are some real-world examples of just how hard it can be for AI to get competent at even very narrow tasks. For example, you'd have no trouble asking a friend for a restaurant recommendation, but it's remarkably difficult to teach AI to perform the same task well.
Everyone from Elon Musk to the late Stephen Hawking to an entire research institute has warned about the dangers of an AI apocalypse, where some real-life version of SkyNet or HAL 9000 will rise up to wipe out pesky human life. This is probably alarmist, not just because general artificial intelligence like we see in the movies may be impossible, but because -- for the foreseeable future -- there's no money in it.
The White House recently requested information on the future of artificial intelligence. Since it's our job to think about these things, as builders of AI-powered assistants for businesses, our Chief Data Scientist and Co-founder, Byron Galbraith, was well placed to weigh in on the questions they posed. Below are his answers.