Why A.I. Agents Will Have No Choice But to Specialize

Posted by Alyssa Verzino on Apr 26, 2018 1:00:00 PM

 


We've argued before that generalized artificial intelligence is generally unprofitable (and maybe impossible), and real-world examples show just how hard it can be for A.I. to become competent at even very narrow tasks. For example, you'd have no trouble asking your friend for a restaurant recommendation, but it's really difficult to teach A.I. to perform the same task well.

Restaurant delivery service GrubHub tried. GrubHub failed. GrubHub failed repeatedly.

GrubHub finally cracked the nut, and it came down to domain expertise -- which, it turns out, GrubHub didn't have (despite thinking it did).

So, how hard can it be to tell you which restaurants are popular, especially if you have several million takeout order records to train your A.I. recommendation engine? As GrubHub found out, that simple problem is really hard, because almost all of its data was useless in its raw form.

For a company like Amazon, recommendations are relatively straightforward. They use a comparative algorithm -- people who bought Item #1 also bought Items #2, #3, and #4 -- to suggest additional purchases you might like. Everything has a unique SKU, and if you're willing to buy the first item from Amazon, you're probably willing to buy its associated items -- the next book in a series, the wall mount for your TV, the fabric softener to pair with your laundry detergent -- from Amazon, too.
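Amazon's production system is far more sophisticated, but the core "people who bought X also bought Y" idea reduces to counting item co-occurrences across baskets. Here's a minimal sketch in Python, with made-up SKUs standing in for real products:

```python
from collections import Counter, defaultdict

# Hypothetical order history: each list is one customer's basket of SKUs.
orders = [
    ["tv-55in", "wall-mount", "hdmi-cable"],
    ["tv-55in", "wall-mount"],
    ["tv-55in", "soundbar"],
    ["detergent", "fabric-softener"],
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(Counter)
for basket in orders:
    for item in basket:
        for other in basket:
            if other != item:
                co_counts[item][other] += 1

def also_bought(item, n=3):
    """Items most frequently bought alongside `item`."""
    return [sku for sku, _ in co_counts[item].most_common(n)]

print(also_bought("tv-55in"))  # → ['wall-mount', 'hdmi-cable', 'soundbar']
```

Note that this only works because every product has one stable SKU — the exact property GrubHub's menu data lacked.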

GrubHub had a fundamentally different problem. Move to a new address, or simply have your favorite shawarma place close, and GrubHub couldn't easily suggest a replacement.

That's because every restaurant has its own take on a hamburger, a Cobb salad, or chicken vindaloo. Some people think guacamole can have peas, or that potato salad can have raisins, so suggesting even two different versions of the same item can be fraught with peril. You can't make blind one-to-one recommendations of one menu item for another, lest you end up suggesting Chicago-style deep dish pizza to a lover of a classic New York slice.

GrubHub had no single SKU to make one-to-one menu comparisons across restaurants. Moreover, geography drives a lot of customer loyalty, so there was no way to say, "people who ordered the burger at Restaurant #1 also ordered the burger at Restaurant #2," because almost no one consistently ordered from both places, let alone ordered "similar" dishes. The standard comparative algorithm just wasn't possible.

To suggest food, GrubHub had to understand food, and it turns out that -- despite a crazy amount of food-order data and menu listings -- GrubHub knew almost nothing about actual food.

To make comparisons work, GrubHub first had to normalize the menu data, so that it was at least recommending comparable dishes. That required building a master taxonomy of menu items that acknowledged the generally accepted nuances of food.
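In practice, normalization means mapping every restaurant's free-text menu listing onto a shared canonical dish. Here's a toy sketch of that idea — the category names and keywords are illustrative assumptions, not GrubHub's actual taxonomy:

```python
# A toy slice of a menu taxonomy: canonical dish -> keywords that map to it.
# These categories and keywords are made up for illustration.
TAXONOMY = {
    "pizza.deep_dish": ["deep dish", "chicago style pizza"],
    "pizza.thin_crust": ["thin crust", "new york slice", "ny style pizza"],
    "salad.cobb": ["cobb salad"],
    "curry.vindaloo": ["vindaloo"],
}

def normalize(menu_item):
    """Map a raw menu listing to a canonical dish, or None if unknown."""
    text = menu_item.lower()
    for canonical, keywords in TAXONOMY.items():
        if any(kw in text for kw in keywords):
            return canonical
    return None

print(normalize("Famous Chicago Style Pizza (large)"))  # → pizza.deep_dish
print(normalize("Grandma's Cobb Salad"))                # → salad.cobb
```

With every listing mapped to a canonical dish, one-to-one comparisons across restaurants finally become possible — but building a taxonomy that carves up food the way eaters actually do is exactly where domain expertise comes in.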

Technically, a hot dog is a sandwich (no, really), but almost no one would equate a hot dog to a sandwich when comparing two menus. What the data says and how the data should be used are two different things.

GrubHub unleashed some brilliant data scientists on the problem, looking for natural clusters of menu items, but never found a taxonomy that proved effective at recommendations. The answer wasn't more data, but more human expertise.
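A quick sketch shows why surface-level clustering misleads here. Word-overlap similarity (Jaccard, used below purely as a stand-in for whatever GrubHub's team actually tried) rates culinarily distant dishes as near-matches and near-identical dishes as unrelated:

```python
def jaccard(a, b):
    """Word-overlap similarity between two menu item names (0 to 1)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# High overlap, but a New York slice fan won't want deep dish:
print(jaccard("Chicago Style Pizza", "New York Style Pizza"))  # → 0.4
# Zero overlap, even though these are nearly the same dish:
print(jaccard("Hamburger", "Cheeseburger"))  # → 0.0
```

No amount of extra order data fixes this — the missing signal is knowledge about food itself, which is why the answer was human expertise rather than a bigger dataset.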

GrubHub brought in a cookbook author to help build a smarter, more nuanced taxonomy of food, one grounded in an academic understanding of food history, culinary techniques, and modern restaurant commerce. The company sought out domain expertise that could organize data in a way no off-the-shelf algorithm could.

In short, GrubHub brought in a human specialist to help train its specialist A.I. It used human expertise to organize its A.I. training data into a useful form, because no general technique generated useful results.

This is the parable of specialist artificial intelligence: hyper-specific A.I. agents designed to mimic expertise in a very narrow area of knowledge. It isn't easy, but it's possible and, if it's done right, profitable.

That's why Talla is building domain-specific A.I. agents for your knowledge base, and why it's working to combine these specialist A.I.s into more broadly talented virtual workers with BotChain.

To learn more about how specialist A.I. can help your organization get more from its data, contact Talla today.

Topics: AI, Artificial Intelligence, future of ai