Even at this early point in the AI era, we may be seeing a divergence: on one side, vendors bending over backward to deliver ethical AI apps and services; on the other, a real conundrum over data already collected. I am not an expert on anything, including AI, but I analyze things and report on what I see.
The most significant early differentiator I see for the AI industry is how a vendor handles security. Salesforce has gone so far as to develop a layer of technology between raw data and its amalgamated use. The company calls it the Trust Layer, and it is equal parts technology and best practice. It is the best practices, still being formulated, that should interest all of us.
I recently wrote about Salesforce’s tenets of trusted and ethical AI. Among them are “Your data isn’t our product” and “Our product policies protect human rights.” Oracle is right behind, and it is introducing similar concepts at its CloudWorld conference in Las Vegas.
Much of this centers on the large language model, or LLM. In one of its press releases, Oracle stated that "no customer data is shared with LLM providers or seen by other customers or other third parties." Salesforce says much the same (see above), and that's the crux of the issue: Who owns the data, and how secure is it?
What about all the personally identifiable information (PII) that's already in the hands of social media companies? Some of the biggest companies hold a great deal of PII and have used it freely, while others promise to keep it secure. That will have to change in the new AI era, simply because two completely divergent models can't coexist in the same market space.
Software Industry Leaders and Lawmakers, Behind Closed Doors
This is not idle speculation. While Salesforce was producing Dreamforce in San Francisco and Oracle was staging CloudWorld in Las Vegas, many of the rest of the software cognoscenti met with lawmakers in Washington, D.C., to discuss AI's future.
The group included Bill Gates, Elon Musk, Mark Zuckerberg, Sam Altman, Sundar Pichai, and others. Several acknowledged that we have to get this “right,” but no one ventured to guess what that meant when, really, there should be no debate.
The people in D.C. were essentially the ones who already hold a huge amount of PII and are building LLMs on top of it. Moreover, many of these billionaires got rich using PII as their product. Call me cynical, but I fail to see how they could be fair arbiters in this case. Or, to quote Upton Sinclair, "It is difficult to get a man to understand something when his salary depends upon his not understanding it."
According to the New York Times, Sen. Josh Hawley, R-Mo., called the meeting “The biggest gathering of monopolists since the Gilded Age.” Indeed. The article also noted that the press was not allowed into the meeting. But wait, there’s more.
The same article said that Senate Majority Leader Chuck Schumer, D-N.Y., "has acknowledged a tech-knowledge deficit within Congress and had said he would lean on Silicon Valley leaders, academics and public interest groups to teach members about the technology." If so, where were the academics and public interest groups? Why was the press excluded?
Maybe it’s just me, but the people at that meeting were precisely the ones who should not be teaching members of Congress about the technology.
The Path Forward
What to do? For starters, it would be great if the industry adopted Salesforce’s tenets or something close.
Realistically, it would be fine if all members of the industry pledged similar things in their own words. Better still, there might need to be an industry group to which all vendors subscribe, one that promotes a vision of truth and ethics. It might also be useful to quash any direct communication between the Sultans of Silicon Valley and members of Congress.
I know there will be free speech objections to that last point, but they don't change the argument. The purpose of keeping church and state separate is ethics, and the onus is on Congress to avoid the appearance of being too chummy with the industry. Industry executives and their companies are not the sole source of AI information; members of Congress should be making more of an effort to court outside expertise.
Lastly, the problem of having two models, one where the PII is already in the hands of the industry and another in which vendors pledge otherwise, cannot continue for long. Without better and unified governance and ethics, AI may never get past making funky Christmas cards. That would be a shame.