Reasoning in CRM

It says something about the importance we attach to AI that I went to two conferences last week and sat through two panel sessions on the subject. At CRM Evolution, I was part of the discussion in a breakfast session Paul Greenberg organizes each year. Then I flew to Las Vegas for the Oracle CX show. There, executives involved in the adaptive intelligent applications product line tried to define the basics in a session for analysts and reporters.

I have to say that neither session was especially illuminating, which is not to cast aspersions on any of the participants but more to provide a gauge of how early we still are in the market cycle. If it seems hard to define AI today, it’s equally difficult to wrap our heads around its potential.

Who Knows Best?

In Washington, at Evolution, people talked about the trust factor and how easy or difficult it will be to accept that an algorithm might know more about a situation than the user. For instance, a GPS system will “know” about road conditions that humans can’t see.

In Las Vegas, the discussion started with the now typical dystopian fear that algorithms or bots might be about to steal our jobs. For some reason, this seems to engender visceral fear in the population in a way that packing up factories and shipping them to low-wage countries might not.

It struck me, after an accumulation of research, that while we might talk about lost jobs or trust issues, the reasons for unease about AI — or whatever we decide to call it — might be more fundamental. It might be that AI signals the replacement or significant diminution of a style of thinking that is uniquely human — something that has evolved with us — with a style of thought that has been part of our experience only since the Renaissance and the development of the scientific method.

First, let’s agree on terms. The broadly knowledgeable silicon- and metal-based intelligent life form that has lurked in science fiction for the better part of a century is still fiction and will be for some time. Those who are concerned about such an entity replacing us will have to wait many more years before something like HAL is available. Then, like the first steam engines, we’ll discover it’s too big to move around, so it will be limited.

The AI that we increasingly see in CRM and other business apps is rather one-dimensional. It can tell you the traffic but nothing else. It's analogous to the robots on car assembly lines — each programmed to make a weld or grind a surface, but that's it. Making an assembly line is a matter of setting up many robots in a row, each doing something different, not of empowering some super machine to do it all.

So what’s everyone so concerned about? Simply put, it’s the difference between deductive and inductive reasoning, and now we enter the weeds, just a little.

Pushing the Limits

Deductive reasoning is something we humans do well, and it involves beginning with a premise and deriving conclusions. Surprisingly, math consists of a lot of deductive reasoning. Certain assumptions or postulates start off the reasoning from which we make deductions. More generally, we can deduce from basic ideas too, like this famous syllogism:

1. All men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is mortal.

Note, however, that getting a true and useful conclusion requires a true and useful assumption, postulate or statement. If we'd started with "All men have feathers," we would have gotten nowhere fast, even though our logic would have been impeccable.
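Purely as an illustration (the premises, facts and the deduce function below are hypothetical, not drawn from any real product), the syllogism can be sketched in a few lines of Python. The point it makes is the one above: the machinery runs either way, but the conclusion is only as good as the premise it starts from.

```python
# Deduction: apply a general premise to a particular case.
premises = {"man": "mortal"}       # "All men are mortal."
facts = {"Socrates": "man"}        # "Socrates is a man."

def deduce(entity):
    category = facts.get(entity)          # What is Socrates? A man.
    attribute = premises.get(category)    # What follows from being a man? Mortality.
    return f"{entity} is {attribute}" if attribute else f"Nothing follows about {entity}"

print(deduce("Socrates"))  # -> Socrates is mortal

# Swap in a false premise and the logic still runs, but the conclusion is useless:
premises = {"man": "feathered"}
print(deduce("Socrates"))  # -> Socrates is feathered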

Politics is like that today, and without trying to hurt anyone’s feelings, there are a lot of examples of situations in which we move backward from conclusions to discover the premises it would take to get there — but that’s not the purpose of this piece.

On the other hand, inductive reasoning is the logic of science and the kind of thinking we all do sometimes, especially when there’s time — and probably paper and pencil. Inductive reasoning involves gathering data and applying statistics to discern patterns. It’s the heart of the scientific method and the reason we live in the world we do instead of one in which we’re all subsistence hunters and farmers.

Inductive reasoning involves the language of hypothesis and proof and theory — but not belief. We believe what the data tell us, not what we assume. When the data reveal something wrong about our beliefs, we change them. We don't work backward to discover our premises. Inductive reasoning is what drives AI, and I think it is the heart of our heartburn.
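By contrast, here is a minimal sketch of induction, with invented numbers standing in for observed data. No premise is assumed; the pattern comes from the observations, and the conclusion shifts as new data arrive.

```python
# Induction: infer a pattern from observations rather than from a premise.
# Hypothetical example: how often has this road been congested at 8 a.m.?
observations = [1, 1, 0, 1, 1, 1, 0, 1]   # invented history: 1 = congested, 0 = clear

p_congested = sum(observations) / len(observations)
print(f"Estimated chance of congestion: {p_congested:.2f}")   # 0.75

# New data revise the estimate; nothing is worked backward from a fixed belief.
observations += [0, 0, 0, 0]
p_congested = sum(observations) / len(observations)
print(f"Revised estimate: {p_congested:.2f}")                 # 0.50
```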

In both sessions I attended last week, someone in the audience inevitably brought up the trust issue — as in, “I can’t see how I can trust an algorithm and feel I simply must have the option to override it with my gut instinct.”

If I unpack this, I get the notion that we’re comfortable with our deductions and the premises they spring from, and it’s rather frightening to have to rely on not much more than statistics. Yet the times in human history when we’ve made progress are precisely those times when we pushed back the boundaries of premise and belief, and substituted cold, hard facts derived from data.

What’s different today is that we don’t have a single man like Galileo proposing that the Earth revolves around the sun because that’s what his data tell him. We have millions of them — and their proposals are both profound and banal. In the process, we are rapidly pushing deduction to a smaller footprint than it has ever had in human history, and that can feel frightening.

Denis Pombriant

Denis Pombriant is a well-known CRM industry researcher, strategist, writer and speaker. His new book, You Can't Buy Customer Loyalty, But You Can Earn It, is now available on Amazon. His 2015 book, Solve for the Customer, is also available there. He can be reached at [email protected].
