Are employees at your company surreptitiously using artificial intelligence tools like ChatGPT, Claude, Copilot, and Gemini for everyday business tasks? It’s likely. An October 2024 Software AG study found that half of all employees use “shadow AI” tools to enhance their productivity, and most would continue using them even if explicitly banned by their employer.
Increased productivity is a good thing, but unsanctioned and unregulated AI use poses risks. A February 2025 TELUS Digital survey found that 57% of enterprise employees admit to entering high-risk information into publicly available chatbots. This includes personal data about employees or customers, product or project details, and confidential financial information like revenues, profit margins, budgets, and forecasts.
A clear AI policy will help a business minimize the risks of using AI tools. These risks include leaks of confidential information, compliance failures, accidental copyright violations, and reputational damage. As AI becomes a routine part of knowledge work, every business—even small firms—must establish an AI policy to maximize the benefits of using AI while safeguarding the company, its employees, and its clients.
Risks Addressed by a Formal AI Policy
Unauthorized AI use can create several types of problems:
- Data security: Employees routinely paste sensitive data into public AI tools, including customer information, financial records, and details about unreleased products, thereby losing control over how that data is used. That can make security audits nearly impossible and drive IT staff crazy. Notably, the free versions of ChatGPT and Google’s Gemini can, by default, use conversation data to train their models (in ChatGPT, this can be turned off in settings), making it possible that the information could surface in someone else’s conversation.
- Legal and compliance risks: Sharing protected information with non-compliant AI systems could result in penalties during regulatory audits, even if no actual data breach or harm occurs. For instance, using such systems to summarize patient records could violate HIPAA, while using them to analyze customer data could run afoul of the California Consumer Privacy Act (CCPA).
- Unintentional discrimination: Without clear guidelines, the use of AI can lead to unintentional discrimination in hiring, customer service, and decision-making. This may violate ethical standards and expose the company to legal liability.
- Employee confusion: The lack of a coherent AI policy leads to inconsistent practices and uncertainty about acceptable tools and proper procedures, resulting in reduced productivity and increased anxiety about AI use.
Essential Elements of an AI Policy
The specifics of an AI policy vary by the type and size of company, but at minimum, most AI policies should include the following:
- Permitted AI uses and tools: Clear guidelines on the types of tasks employees may undertake with AI assistance and a list of approved AI platforms for business activities
- Data privacy and legal compliance: Rules for safeguarding confidential, personal, and proprietary information when using AI, coupled with rules that ensure adherence to relevant industry-specific regulations and privacy laws
- Human oversight and transparency: Requirements that employees thoroughly review AI-generated content before use and disclose AI involvement when appropriate in client-facing or public materials
- Risk reporting and incident response: Clear instructions for reporting AI-related errors, security incidents, or potential misuses
- Ownership and intellectual property clarifications: Statements affirming that work products created with AI assistance belong to the company, along with guidance on related intellectual property questions, such as the copyright status of AI-generated content
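To make these elements concrete, a permitted-uses clause might read something like this (a hypothetical example, not boilerplate to copy verbatim): “Employees may use company-approved AI tools for drafting, summarizing, research, and brainstorming. Entering customer personal data, financial records, or details about unreleased products into any AI tool is prohibited unless IT has approved that tool for confidential information.”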
Building Your AI Policy
If your company doesn’t already have an established process for generating policies, AI tools can themselves provide a starting point when used thoughtfully. Here’s an approach:
- Prompt an AI tool like ChatGPT or Claude to generate a basic AI policy template. Be explicit about your company’s size, industry, and other relevant details, and be sure to specify that the template must cover the elements listed above; you can paste them in, and a sample prompt follows these steps. Iterate as necessary until the template has all the required sections.
- Review the generated template carefully, removing generic content and noting areas that need company-specific details.
- Ask for feedback on the draft from key stakeholders, including:
- Leadership to align with company goals and values
- IT team to verify technical feasibility and security measures
- Legal counsel to ensure compliance with relevant regulations
- Department heads to confirm that the policy will be practical to implement
- Incorporate the feedback to create a policy that reflects your company’s specific needs while maintaining necessary protections.
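As promised above, here’s a sample starting prompt; every company detail in it is a placeholder to replace with your own specifics:

“Draft a workplace AI policy for a 50-person accounting firm that handles confidential client financial data. The policy must cover permitted AI uses and approved tools, data privacy and legal compliance, human oversight and transparency, risk reporting and incident response, and ownership and intellectual property. Write it in plain language that every employee can understand.”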
Remember: An AI-generated template is just a starting point for the conversation. The final policy must be tailored to your organization’s specific needs and thoroughly vetted by relevant stakeholders.
The rise of AI tools in the workplace isn’t just a trend—it’s a fundamental shift in how work gets done. Whether your employees are already using AI tools without oversight or are hesitant to use them due to uncertainty, now is the time to establish a formal AI policy. Start with the template approach outlined above, engage your stakeholders, and develop guidelines that work for your organization. A well-crafted AI policy will help your business harness the benefits of AI while minimizing its risks.
(Featured image by iStock.com/girafchik123)