
Is AI Safe? 6 Guidelines for Using Generative AI in the Enterprise


What is left to be said about AI in the enterprise? It’s still early days, of course. But over the last few months, generative AI has been labeled the most disruptive innovation since perhaps the Internet or even electricity. Some people see AI as a threat to human existence, and others as the salvation of humanity.

I guess time will tell. As for me, my AI-powered Roomba keeps getting stuck under the sofa, so I’m not quite ready to head for the bunker.

In the meantime, everyone I know is jumping on the AI bandwagon to try to better understand how its admittedly amazing abilities can be helpful, amusing or threatening. As with the Internet and electricity, organizations and people who adapt to take advantage of it will prosper, while those who either ignore it or are late to the table will risk being left behind.

ChatGPT may or may not pose an existential threat to Homo sapiens, but for one lawyer, who filed a court brief full of case citations ChatGPT had invented, it came pretty close. Two lessons I take from that sorry tale: ChatGPT will make stuff up, and then it will lie about whether the stuff it made up is… well… actually made up.

(I ran into this recently when ChatGPT cited some data I was looking for. When I asked for the citation and couldn’t find it, the AI shuffled its virtual feet, looked ashamed, and admitted it might have made a mistake. Reminds me of my kid when he was 9.)

It’s super important to remember that ChatGPT and other generative AI systems are not advanced search engines; they are predictive engines. That distinction is at the heart of the question of whether AI is safe. ChatGPT really doesn’t care if what it types is real or total fantasy: it’s just predicting, thanks to billions and billions of mathematical calculations, what the most likely next token (character, word, etc.) is. It may seem like you’re chatting with an actual intelligence, but really, it’s just a fancy roulette wheel.

The odds are pretty good that someone (or most everyone) of any technical bent in your organization is already playing around with ChatGPT and its relatives (DALL-E, Synthesia, GitHub Copilot, et al.). So, to quote one of my favorite movies:


Is it safe to use AI in the enterprise?

The short answer to whether AI is safe is “it depends.” The first thing to consider is where the AI is actually running. If you’re using a public service like GitHub Copilot or OpenAI’s ChatGPT, your prompts and any files you upload are out of your control. The GitHub terms of service (TOS) indicate that code snippets can be used to validate their models; whether that’s a security risk for your IP is something you’ll have to answer yourself. Opinions differ.

However, you can download a variety of LLMs, like Llama, and run them locally on your own laptop. Better yet, get the bosses to cough up something with a bunch of Nvidia GPUs for some real performance. With a local LLM, you can fine-tune it on your own datasets and know the data isn’t going anywhere.
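If you want to kick the tires on a local model, here’s a minimal sketch using the llama-cpp-python bindings. Treat it as an illustration under assumptions, not a recipe: the model path is a placeholder for a GGUF file you’d download yourself, and the prompt is a stand-in. The point is that nothing in this flow ever leaves your machine.

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# point it at a GGUF model file you've downloaded yourself.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize our Q3 incident report."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```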

Now… here is the part I promised: 

General Guidelines for AI in the Enterprise

Regardless of where you run your AI experiments, some common sense guidelines can help you sleep better.

1-Monitor use: This toothpaste isn’t going back in the tube. So rather than trying to block all your engineers from trying out the new toy, at least find out who’s doing what and keep an eye on it. Make sure they know where the guardrails are, and that they try not to crash through them.
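What might “keeping an eye on it” look like? One rough sketch, assuming you have egress or proxy logs at all: flag outbound traffic to well-known generative AI hosts. The log format and host list here are illustrative guesses; adapt them to whatever your proxy actually records.

```python
# Rough sketch: flag outbound requests to well-known generative AI hosts
# in a proxy log. The whitespace-separated "timestamp user host" format
# is hypothetical; adjust the parsing to your proxy's real output.
AI_HOSTS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def flag_ai_traffic(log_path: str) -> None:
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip malformed lines
            timestamp, user, host = fields[:3]
            if host in AI_HOSTS:
                print(f"{timestamp}: {user} -> {host}")

flag_ai_traffic("proxy.log")  # hypothetical log file
```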

2-Use public APIs with an abundance of caution: Cloud providers (AWS, GCP, etc.) provide APIs you can use to programmatically call an AI model. That may be a great solution for your application, or it might be insanely stupid. At a minimum, think about whether the data you’re sending should ever travel in plain text, and insist on encryption in transit (and end to end where you can get it).
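As one hedged sketch of the minimum bar when you do call a public endpoint: keep the API key out of source control, let TLS handle transport encryption, and screen the payload before it leaves. The model name and prompt below are placeholders.

```python
# Guarded call to a public model API (here, OpenAI's chat completions
# endpoint over HTTPS). Assumptions: the key lives in an environment
# variable, and the prompt has already been screened for sensitive data.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # fails loudly if unset; never hardcode it

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",  # TLS protects data in transit
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": "Explain idempotency in one line."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```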

3-Examine which model(s) you use: External generative AI services have trained their models on, well, who knows? OpenAI, the up-to-now gold standard, is rumored to be getting dumber. With enormous computing power behind them, it’s safe to assume their models were trained on extremely large corpora, perhaps petabytes of data. It’s equally safe to assume their compute costs are way over my current Visa credit limit, and at some point brutally honest economics will rear its head and they will have to find ways to cut costs, which (see above) might lead to degraded quality of output.

You can download a variety of open-source models for your own use, but unless your company name starts with Google or Microsoft, you might not have the deep pockets to run them with reasonable performance. In any event, picking a model gets us to…

4-Be prepared to explain: Do you understand how and why the model gave you the results it did? If you can’t account for how the model reached an answer, you’ll have a hard time defending that answer when someone doubts it.
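One practical way to stay prepared is to log every model call with enough metadata to reconstruct it later. A bare-bones sketch; the field names and JSONL file are illustrative choices, not a standard:

```python
# Append-only audit log for model calls: record which model was called,
# with which parameters, and what came back, so results can be explained
# (or at least reconstructed) later. Field names are illustrative.
import json
import time

def log_model_call(model: str, params: dict, prompt: str, output: str,
                   path: str = "ai_audit.jsonl") -> None:
    record = {
        "ts": time.time(),  # when the call happened
        "model": model,     # exact model name/version used
        "params": params,   # temperature, max_tokens, etc.
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_model_call("gpt-4o-mini", {"temperature": 0.2}, "Summarize Q3.", "…")
```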

5-Understand privacy and security: This is a two-way street. You need to ensure your customers’ PII isn’t being shared in ways that could violate either their privacy or the law, and you need to make sure no one can use your AI workloads to steal your IP or hack into your systems. AI in the enterprise brings together a lot of disparate pieces that each have their own privacy and security concerns: code, data, database access, encryption, APIs, and human factors. It’s a good idea to bring together people from across the enterprise who have the relevant domain knowledge to review all the issues as a group.
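On the outbound side, here’s a deliberately naive sketch of the idea: scrub obvious PII before any text leaves your perimeter. Two regexes don’t make a redaction program (real deployments use dedicated tooling and a legal review), but they show the shape of it.

```python
# Bare-bones PII scrub to run before text is sent to an external model.
# These two patterns (email addresses, US SSNs) are only illustrative;
# real redaction needs dedicated tools and legal review.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```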

6-Evaluate risk tolerance: Finally, how edgy are you? Does your organization push the envelope on tech, or are you late adopters? (Luddites aren’t going to be clicking on this article anyway.) Balancing risk with progress can be tough; categorizing all your work into different risk buckets could make it simpler. I like to consider the consequences of taking a risk — highly risky behavior that has a low score on the consequences scale is easier to swallow than even a slightly risky action with huge consequences. This is why aircraft have redundant systems, but maybe only one coffee pot. (Ok, coffee was a bad example. How about the microwave instead?)
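If the likelihood-times-consequence framing appeals to you, it’s easy to turn into a toy scoring function. The 1–5 scales and bucket thresholds below are made up; tune them to your own appetite for risk.

```python
# Toy risk bucketing: score = likelihood x consequence, both on a 1-5
# scale. The thresholds are arbitrary illustrations, not a methodology.
def risk_bucket(likelihood: int, consequence: int) -> str:
    score = likelihood * consequence
    if score <= 5:
        return "low"
    if score <= 12:
        return "medium"
    return "high"

# Very likely but nearly harmless: easier to swallow...
print(risk_bucket(likelihood=5, consequence=1))  # low
# ...than a slightly risky action with huge consequences.
print(risk_bucket(likelihood=2, consequence=5))  # medium
```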

I used to have a neighbor out in the country who was a retired farrier. That is, he took care of horses’ feet. About 150 years ago, being a farrier was possibly a ticket to a good-paying slot in life. But Henry Ford kind of took care of that. We still have farriers, but it’s because horses are a hobby for the wealthy, not a necessity for transportation and labor. My neighbor Chuck didn’t live in a fancy house or drive an expensive pickup. He enjoyed what he did… but I can tell you the guy who works on my Audi makes a lot more money and is in a lot more demand.

I trust you can see the moral of the story. Talk to an engineer to learn how AI can be used safely for your business.