Part 5.4 of 6

AI Policies & Procedures

📚 2-2.5 hours · 🎯 Intermediate · 📅 Updated January 2026

AI Policy Framework

AI policies translate governance principles and risk management strategies into actionable organizational rules. A comprehensive policy framework addresses AI throughout its lifecycle, from initial concept through deployment and retirement.

💡 Policy Hierarchy

AI policies should integrate with existing governance structures: strategic AI policy at board level, operational policies at management level, and detailed procedures at operational level. Consistency across levels is essential.

AI Acceptable Use Policy

📝
Acceptable Use Policy (AUP)

Defines permitted and prohibited uses of AI within the organization.

Key Components

  • Scope: Which AI tools and systems are covered (enterprise, third-party, GenAI)
  • Permitted Uses: Approved use cases and applications
  • Prohibited Uses: Explicitly forbidden applications
  • Approval Requirements: What uses require additional authorization
  • Data Handling: Rules for data input to AI systems
  • Output Verification: Requirements for reviewing AI outputs
  • Incident Reporting: How to report AI-related issues

Common Prohibitions

  • Inputting confidential, personal, or proprietary data into unapproved AI systems
  • Using AI outputs for critical decisions without human review
  • Representing AI-generated content as human-created without disclosure
  • Using AI to create deceptive, harmful, or illegal content
  • Circumventing AI safety controls or content filters
  • Using AI to discriminate or make biased decisions
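Several of these prohibitions can be enforced at the point of use. The sketch below shows one way to gate inputs before they reach an AI tool, checking both that the tool is approved and that the text does not appear to contain sensitive data. The pattern list and tool registry are illustrative assumptions; a real deployment would call an enterprise DLP service rather than ad-hoc regexes.

```python
import re

# Illustrative patterns only; production systems should use a DLP service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

# Assumed registry of sanctioned tools (names are hypothetical).
APPROVED_TOOLS = {"enterprise-copilot", "internal-llm"}

def check_ai_input(text: str, tool: str) -> list[str]:
    """Return a list of policy violations; an empty list means the input may proceed."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"tool '{tool}' is not on the approved list")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            violations.append(f"possible {label} detected in input")
    return violations
```

A guard like this blocks the most common AUP breach (pasting confidential data into an unapproved chatbot) without requiring users to memorize the policy text.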

AI Procurement Policy

🛒
Procurement & Vendor Management

Governs the acquisition of AI systems and services from third parties.

Procurement Requirements

  • Risk Assessment: Evaluate AI system risk before procurement
  • Due Diligence: Assess vendor AI governance practices
  • Documentation: Require technical documentation, model cards
  • Testing: Validation testing before deployment
  • Contractual Protections: AI-specific contract clauses

Vendor Due Diligence Checklist

  • Governance: AI ethics policy? Governance structure? Risk management?
  • Technical: Model architecture? Training data? Testing methodology? Performance metrics?
  • Compliance: EU AI Act compliance? Data protection? Sector-specific requirements?
  • Security: Security certifications? Incident response? Vulnerability management?
  • Support: Documentation quality? Training availability? Support responsiveness?
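To make the checklist auditable, the categories can be encoded as data and scored per vendor. This is a minimal sketch; the item names and equal weighting are assumptions, and an organization would tune both to its own risk appetite.

```python
# Checklist items mirror the five due-diligence categories above
# (item names are hypothetical shorthand).
DUE_DILIGENCE = {
    "governance": ["ai_ethics_policy", "governance_structure", "risk_management"],
    "technical": ["model_architecture", "training_data",
                  "testing_methodology", "performance_metrics"],
    "compliance": ["eu_ai_act", "data_protection", "sector_requirements"],
    "security": ["certifications", "incident_response", "vulnerability_management"],
    "support": ["documentation", "training", "responsiveness"],
}

def score_vendor(answers: dict[str, bool]) -> dict[str, float]:
    """Per-category fraction of checklist items the vendor satisfies."""
    scores = {}
    for category, items in DUE_DILIGENCE.items():
        met = sum(1 for item in items if answers.get(item, False))
        scores[category] = met / len(items)
    return scores
```

Recording the answers as structured data (rather than a filled-in document) lets procurement compare vendors side by side and re-run the assessment at contract renewal.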

AI Development Standards

💻
Development & Deployment Standards

Technical and process standards for in-house AI development.

Development Lifecycle Standards

  1. Problem Definition: Document use case, success criteria, stakeholders
  2. Data Management: Data sourcing, quality, bias assessment, documentation
  3. Model Development: Architecture selection, training, validation
  4. Testing: Functional, bias, robustness, security testing
  5. Documentation: Model cards, technical documentation, risk assessment
  6. Approval: Risk-based approval process before deployment
  7. Deployment: Staged rollout, monitoring setup
  8. Monitoring: Performance tracking, drift detection, incident response
  9. Retirement: Decommissioning procedures, data handling
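The nine stages above are sequential, and a simple gate can enforce that no stage (in particular, approval) is skipped before deployment. A minimal sketch, assuming stage names match the list:

```python
# Lifecycle stages from the development standards above, in required order.
LIFECYCLE_STAGES = [
    "problem_definition", "data_management", "model_development",
    "testing", "documentation", "approval", "deployment",
    "monitoring", "retirement",
]

def next_allowed_stage(completed: list[str]) -> str:
    """Return the next permitted stage; raise if stages were skipped or reordered."""
    if completed != LIFECYCLE_STAGES[:len(completed)]:
        raise ValueError("lifecycle stages must be completed in order")
    if len(completed) == len(LIFECYCLE_STAGES):
        return "complete"
    return LIFECYCLE_STAGES[len(completed)]
```

Wiring a check like this into a CI/CD pipeline turns the policy's "risk-based approval before deployment" from guidance into an enforced control.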

✓ Model Documentation Requirements

Every AI model should have a model card documenting: intended use, training data, performance metrics, known limitations, fairness testing results, and maintenance requirements. This supports both internal governance and regulatory compliance.
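The required model card fields can be captured as a structured record so completeness is checkable before approval. This is one possible shape, not a standard schema; field names follow the requirements listed above.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card; fields mirror the documentation requirements above."""
    name: str
    intended_use: str
    training_data: str
    performance_metrics: dict[str, float]
    known_limitations: list[str]
    fairness_results: dict[str, float] = field(default_factory=dict)
    maintenance_notes: str = ""

    def missing_fields(self) -> list[str]:
        """Flag required narrative fields left empty, to block approval."""
        required = {
            "intended_use": self.intended_use,
            "training_data": self.training_data,
        }
        return [name for name, value in required.items() if not value.strip()]
```

Keeping model cards as code (or YAML validated against a schema) makes them versionable alongside the model itself, which supports both internal governance and regulatory evidence.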

GenAI-Specific Policies

Generative AI requires additional policy considerations due to risks specific to it, such as unverified outputs, inadvertent data disclosure, and unclear intellectual-property status.

GenAI Policy Elements

  • Approved Tools: List of sanctioned GenAI platforms
  • Data Input Restrictions: What can/cannot be shared with GenAI
  • Output Verification: Mandatory fact-checking requirements
  • Disclosure: When to disclose AI-generated content
  • IP Considerations: Ownership of AI-assisted outputs
  • Training Data: Whether outputs can be used to train external models
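The verification and disclosure elements above can be expressed as release gates on GenAI content. A minimal sketch, assuming a simple record per output; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class GenAIOutput:
    content: str
    tool: str
    human_reviewed: bool = False   # fact-checked per the output-verification rule
    disclosed_as_ai: bool = False  # labeled per the disclosure rule

def release_checks(output: GenAIOutput, external: bool) -> list[str]:
    """Policy gates before GenAI content is published; empty list means cleared."""
    issues = []
    if not output.human_reviewed:
        issues.append("output has not been fact-checked by a human")
    if external and not output.disclosed_as_ai:
        issues.append("externally published content must disclose AI generation")
    return issues
```

Distinguishing internal from external use keeps the gate lightweight for drafts while enforcing disclosure where it matters most.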

⚠ Shadow AI Risk

Overly restrictive policies may drive employees to use unauthorized AI tools (shadow AI), creating greater risk. Policies should balance control with providing approved alternatives that meet legitimate business needs.

📚 Key Takeaways

  • AI policies should form a hierarchy: strategic policy, operational policies, detailed procedures
  • Acceptable Use Policy defines permitted/prohibited AI uses and data handling rules
  • Procurement Policy requires risk assessment, vendor due diligence, and contractual protections
  • Development Standards cover the full lifecycle from problem definition to retirement
  • Model documentation (model cards) is essential for governance and compliance
  • GenAI requires specific policies for data input, output verification, and disclosure
  • Policies must balance control with usability to prevent shadow AI