Almost everyone in your company is probably using AI. Microsoft and LinkedIn’s 2024 Work Trend Index survey found that 75% of global knowledge workers are using AI at work, and 46% started in the last six months. And it’s likely not just your marketing department. Salespeople may be using it to draft pitch letters, human resources staff to search for policy language examples, and IT teams to debug code.
Gallup found that 44% of the chief HR officers surveyed have no idea how often their employees are using AI to do their jobs. Scarier still, 78% of AI users are bringing their own AI tools to work. That makes most workplaces the wild, wild west, and that rarely ends well.
In fact, some organizations have already had AI mishaps: in a McKinsey survey, 44% of respondents said their organizations have experienced at least one negative consequence from using AI. Those consequences stem from AI-related risks including inaccuracy, cybersecurity exposure and intellectual property infringement, and they can be especially damaging to public companies.
No matter your company’s size or sector, a prudent risk management approach to AI is a necessity. Sadly, just as when social media first emerged, most companies have been slow to adopt policies for responsible use. But ignoring the rise of AI doesn’t make it go away: McKinsey found that only 18% of organizations have an enterprise-wide council or board authorized to make decisions on responsible AI governance.
The first step in any governance plan is to have an AI policy, and it should cover four components.
1. Have A Point Of View
What is your vision for AI in your workplace? In my agency’s policy, we start from the perspective that AI should empower humans. We are human-centric by design, believe in augmentation over automation, and use AI responsibly and cost-effectively. Remember, many employees may fear that AI is coming for their jobs. Be transparent about how AI could impact their future; a statement that doesn’t pass the smell test will do more harm than good.
2. Set Clear Protocols
What are the dos and don’ts? Which AI tools does your organization offer as safe spaces for employees to play in? If your organization hasn’t invested in its own AI sandbox, what guidelines should employees keep in mind as they engage? Be sure to emphasize the potential for copyright issues as well as the need to protect sensitive information. Companies in regulated industries will need to be especially careful to check any AI-generated work product against their regulatory requirements.
3. Learn The Rules Of The Road
AI output is only as good as the input. Your policy should offer guidance on prompt engineering and on managing outputs. For example, stress that employees should treat any facts or research provided by AI tools as potentially biased, false or plagiarized. We’ve probably all heard examples of AI hallucinating, but what does bias look like? A colleague asked an AI tool to generate a picture of Nazis, and it returned an image of an equal mix of Black, Asian, Hispanic and white people in uniform because the tool had been programmed to show diversity. Make sure your policy outlines how users can fact-check AI-generated information and identify independent, original sources for anything an AI tool provides.
4. Merchandise The Policy
Your policy can’t just sit in a risk management book. It needs broad distribution, reinforcement and enforcement. Share examples of responsible use in internal communications, and ensure managers are up to speed as this rapidly expanding technology takes hold. If you don’t have a governance board, consider starting with a task force to at least ensure pain points and success stories are being heard and addressed.
Not having an AI policy in place is just bad business. Organizations without one not only risk negative outcomes; they also lose out on the efficiency and value this new tool can bring to the bottom line. By adopting a prudent risk management approach to generative AI and creating a policy that can grow with your organization, you can avoid the pitfalls and reap the benefits of AI.