Salesforce announced its AI Cloud last week in Chicago during Connections, its marketing conference. AI Cloud incorporates many aspects of ChatGPT, and the GPT moniker is all over the offerings, as in Einstein GPT or, more formally, the Einstein GPT Trust Layer.
The nice thing about the Trust Layer, according to the company, is that it makes AI Cloud open and extensible while providing data privacy and security. All of this is good, though the skeptic in me suspects that invoking trust is a way of priming us to think a certain way.
Trusted? How can they say that? Let’s get wonky.
Editor’s Note — June 13, 2023:
See Denis Pombriant’s follow-up column Salesforce’s Trusted AI Layer Makes Sense After All.
If trusted is a present-tense modifier, it implies an accumulation of interactions (or something like them) that built that trust. Salesforce the company has certainly developed such a level of trust.
Still, the AI product hasn’t been out long enough to be trusted, even in a majority-of-those-surveyed way. If trusted only means trusted as of a week ago, so what? But if what the company really means is trustworthy, then I think it has a reasonable case.
Salesforce goes to great lengths to discuss which large language models it uses, the process by which its AI products make recommendations, and how clients use those recommendations.
Based on Salesforce’s long history of interactions with customers, we can certainly say, in general, that the company is worthy of trust, trustable, and maybe even trusted. But the company’s general reputation is not the same thing as the new product.
So, yes, trustworthy is plausible, and in the fullness of time, we might get around to saying that Salesforce’s AI products are trusted. But rolling out trusted today undermines their whole case because it tries to ram down our throats Salesforce’s opinion of itself.
The whole trusted thing may be overblown anyhow.
Closer to the bone, businesses spend money for two fundamental reasons that have not changed since the last time I dragged out this hoary aphorism. They spend money, i.e., buy our products, to save money or to make more of it. Doing that, in this context at least, requires increasing precision in marketing and selling. Hence another hoary aphorism: “right product, right customer, right time.”
The exactitude of Salesforce’s AI products enables users to zero in on all of those rights. So, the riddle for me is why Salesforce is messing around with a nebulous term like trusted when it could be selling the thing decision-makers care a lot about: precision.
Pursuit of Precision in Modern CRM
Precision doesn’t exactly roll off the tongue, though, with repetition, I bet it could. Interestingly, that seems to be where we are in CRM today.
We’ve been through a long learning curve, beginning with simple single-use client-server apps in the 1990s, and with every iteration of the product set, we’ve inched closer to precision.
We consolidated databases, eliminated duplicates, added a ton of functionality — like reporting, analytics, and code generation — and made CRM indispensable for managing the front office. After all, who wants to pay for a simple database that can’t accelerate business? What’s left?
Security, for one: we certainly don’t want a powerful engine that exposes our most sensitive information to outlaws in an increasingly Wild West environment. But to me, security should be standard equipment, like the tires that come with a new car.
Precision is different. It’s the thing we all crave about new technology, the demonstrable deliverable that says you got what you paid for. You can’t easily say that about trust.
So, say and do anything you want about this; it’s still a free country, after all. But as an analyst, I can only smile at claims of being trusted. What I’ll be looking for is precision.