The frustrating thing about reading AI news is that it covers a lot of the stuff that doesn't matter. As someone who runs an AI company, invests in AI companies, and writes a newsletter about AI (and thus reads a lot of AI news), I thought it would be good to highlight 5 key ideas that seem mostly missing from the frameworks people use to think about AI. I do talk to a lot of smart people who know these things; in fact, some of these ideas came from executives I've spoken to about AI adoption. But most people, I believe, are missing these key pieces.
1. AI innovation is more limited than you think. The latest wave of innovation is largely improvements in neural network technology: novel topologies, new training methods, and better hyperparameter tuning. AI fields like symbolic logic and evolutionary algorithms have hardly been touched, and even for neural nets much of the work has been research-oriented and is difficult to translate into applications. That's why we aren't about to go into an AI winter: in the big scheme of things, only a small slice of AI research has led to such massive gains and improvements. There is much, much more to come.
2. AI hardware continues to be a thing people ask me about. Anyone who reads this knows I'm a fan, but I regularly get questions like "hasn't NVIDIA won?" and "what would we use a new chip for?" Let me give you an interesting fact that an executive at a large company recently highlighted for me: $300 billion is invested in semiconductor research annually. That is a lot of money, and most of it has been invested in moving chips forward along the curve of Moore's Law. That is coming to an end. So, where does that $300B go? It won't cease to be spent. It will go into new chip designs - optical, analog, neuromorphic, etc. That will lead to all kinds of new innovation that we can't fully understand right now. But it will take time. A chip may take a decade from initial research to production in a device somewhere, but it's coming. And it will be a massive revolution.
3. AI is not (yet) electricity. I know this is a common framework people use to think about AI. I believe it is wrong. I've actually read a bit about the history of electricity in the past year to figure out if I see parallels in AI adoption. I don't. Electricity is much more standardized and fungible; AI is not. Until we get to the point where a unit of intelligence is a fungible thing (if we ever do), I think the electricity framework fails for AI.
4. There is a coming phase shift in cultural/workflow adoption, probably 2-3 years out. Here is the example I like to give... in 1996, most companies had a process for making printed brochures. When the web came along, people sometimes designed a web page by following their old process for making printed brochures, and then giving the finished product to a web developer to turn into a web page. That was the wrong process, and AI is in that phase now. It is being bolted onto existing workflows, which sometimes means limited functionality and effectiveness, much the way "turn this into HTML" was bolted onto an old graphic design process. As tools and workers of a new generation start to use AI more natively, this will hit a tipping point. The beginning of the phase shift, when some companies really start to outperform others because of AI adoption, is probably still 2-3 years away; the tipping point, when the world flips over to the new paradigm, is probably 5-7 years away.
5. And finally, the most frustrating thing of all is that we are focused on the wrong problems. We are worried about killer general AI, which is a long way off, and we are worried about autonomous vehicles deciding whether to kill a pedestrian or a passenger. I believe neither of these problems should be given a lot of thought right now. There are more immediate issues, like the interaction between data and UI and advertising, and what Google and Facebook are doing with your data, and to your mind. I worry about tools like Waze, which may start to use AI to optimize for broader goals. For example, Waze knows I try to get to work by 8:30am, and you do too, and today I'm running a bit late and you are running a bit early. Would Waze send you on a route that is actually 1 minute slower, in order to help me get to work on time? Would it do that because it's "fair" to make sure the highest number of people get to work on time? Or is that punishing you for being more responsible than me and leaving on time? Or would it send me a faster way because I am more valuable and click on more ads in the app? I worry more about the bias and optimization challenges that creep into our systems and algorithms as they run more of our lives than I ever do about killer AI.
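The trade-off above is easy to make concrete. Here is a toy sketch - an invented model, not Waze's actual behavior - with two drivers and two roads, where each road can carry one car. A router that serves drivers in request order strands the late driver, while a router that maximizes on-time arrivals puts the early driver on a slightly slower road so both make it:

```python
# Hypothetical routing model: route times, deadlines, and driver names are
# all made up for illustration.
from itertools import permutations

ROUTE_MINUTES = [20, 22]                       # travel time of each road
DEADLINES = {"late_me": 21, "early_you": 25}   # minutes each driver has left

def greedy_router(order):
    """First come, first served: each driver takes the fastest free road."""
    free = set(range(len(ROUTE_MINUTES)))
    assignment = {}
    for driver in order:
        best = min(free, key=lambda r: ROUTE_MINUTES[r])
        assignment[driver] = best
        free.remove(best)
    return assignment

def fair_router(drivers):
    """Brute force: pick the assignment with the most on-time arrivals."""
    def on_time(assignment):
        return sum(ROUTE_MINUTES[r] <= DEADLINES[d]
                   for d, r in assignment.items())
    candidates = [dict(zip(drivers, perm))
                  for perm in permutations(range(len(ROUTE_MINUTES)),
                                           len(drivers))]
    return max(candidates, key=on_time)

# "early_you" opened the app first, so the greedy router hands them the
# fast road and "late_me" misses the deadline (22 > 21).
greedy = greedy_router(["early_you", "late_me"])

# The fairness router sends "early_you" down the road that is 2 minutes
# slower, and both drivers arrive on time.
fair = fair_router(["early_you", "late_me"])
```

Neither policy is obviously right, which is the point: the choice of objective function is a values question hiding inside an optimization.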
If you want to be ahead of the curve, you should give Talla a whirl and adopt the most innovative and progressive customer support AI automation platform on the market.