CUES AI Guidance & Governance

A more engaging, web‑ready governance page—structured for clarity, trust, and quick scanning, inspired by modern governance layouts.

📅 Effective Date: ––2025 (to be finalized)
👥 Applies To: All CUES employees, contractors and authorized partners
🧰 Tools Covered: ChatGPT‑5, Microsoft Copilot, Teams Maestro, Zoom (external meetings), Amazon QuickSight (including Q), and other approved AI tools
Member trust first · Consent & ownership · Controls & auditability · Transparency & learning

Governance Pillars

Scan‑friendly principles that anchor responsible AI at CUES. Each card can link to controls, SOPs, and tool guidance.

Member Trust First

Protect the privacy, security, and trust of members and partners.

Compliance & Ethics

Align AI use with GLBA, GDPR, CCPA, and applicable regulations.

Transparency & Explainability

Communicate when and how AI is used; avoid misleading outputs.

Human Oversight

AI outputs are advisory and require appropriate human review.

Data Minimization

Use the least sensitive data required and apply privacy-by-design.

Consent & Ownership

Obtain explicit consent and respect ownership and deletion requests.

Collaboration & Innovation

Partner with regulators and members to evolve best practices.

Environmental Responsibility

Consider environmental impact and favor efficient solutions.

Policy Sections

Click to expand. This layout is designed for modular web publishing and easy reading.

1. Purpose

CUES embraces Artificial Intelligence (AI) — including Generative AI, Machine Learning and AI Agents — to improve operating efficiency and member engagement through safe, ethical and compliant use within our credit‑union association. This policy defines how AI is used responsibly with appropriate governance, privacy protections and security controls.

1.1 AI Strategy Alignment

Our AI strategy is organized around two pillars:

  • Operating Efficiency – We use AI so that repetitive work is reduced, analysis is accelerated, and teams can spend more time on high‑value member and business outcomes (e.g., faster reporting, summarization, workflow automation and scalable analysis).
  • Member Engagement – We use AI so that we can better measure engagement, identify trends and opportunities, and inform outreach and service improvements using both member engagement data and publicly available industry data, consistent with the Privacy Policy.

This governance policy works together with the AI strategy: the strategy defines where AI creates value; the policy defines how AI is used responsibly.

3. Acceptable Use

Employees may use approved AI tools to:

  • Draft internal communications, reports and training materials.
  • Summarize policies, research or regulatory guidance.
  • Support software development and IT operations.
  • Assist with productivity tasks such as emails, presentations and knowledge searches.
  • Conduct data analysis and reporting, provided member personally identifiable information (PII) is not exposed.
  • Orchestrate multi‑step tasks using approved AI agents to improve efficiency and consistency.

4. Prohibited Use

Employees must not:

  • Input or expose member PII, financial data or confidential business information into unapproved AI tools or environments.
  • Make binding legal, financial or compliance decisions based solely on AI outputs without appropriate human review.
  • Create deceptive, discriminatory or biased outputs; all outputs must be fact‑checked and verified for accuracy and fairness.
  • Generate marketing or public‑facing content without required review and approval.
  • Automate member‑impacting decisions without validation, testing and oversight.
  • Use member data to train or fine‑tune AI models without explicit consent and documented authorization.

5. Data Security & Privacy

Data protection is central to maintaining member trust. The following controls apply:

  • No unapproved PII input – No member or employee PII may be entered into AI tools unless the tool is explicitly approved, secured and documented for the use case.
  • Secure environments – Confidential documents may only be uploaded to secured, approved AI environments. Use encrypted storage, single sign‑on and role‑based access controls.
  • Draft label – All AI outputs must be treated as drafts requiring human review. Employees must label drafts as “AI‑Drafted” until finalized.
  • Public data – Publicly available data (e.g., industry reports, publicly posted leadership changes, NCUA financial filings) may be analyzed for business intelligence purposes, provided the use complies with applicable terms and does not violate confidentiality obligations.
  • Alignment with Privacy Policy – Any AI use involving member data must align with the CUES Privacy Policy and undergo review through the Organizational AI Use Case Governance process.
  • Data not used for model training – Customer and member data (inputs and outputs) are never used to pre‑train or fine‑tune AI models without explicit, documented consent.
  • Ownership & deletion – Members retain ownership of AI‑generated content. Upon request, personal data and generated content will be permanently deleted within 90 days of request or termination.
  • Data retention schedule – Retain AI outputs and associated data in accordance with CUES’ information‑governance schedule; limit retention to 12 months unless extended with documented justification.
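As one way to operationalize the 12‑month default above, a retention check might look like the sketch below; the cutoff constant, function name, and the idea of modeling a documented extension as extra days are illustrative assumptions, not a prescribed implementation.

```python
from datetime import date, timedelta

DEFAULT_RETENTION_DAYS = 365  # 12-month default from the governance schedule

def past_retention(created: date, today: date,
                   extension_days: int = 0) -> bool:
    """True when an AI output has exceeded its retention window.

    extension_days models a documented, justified extension.
    """
    limit = timedelta(days=DEFAULT_RETENTION_DAYS + extension_days)
    return today - created > limit

# An output created 400 days ago is past the 12-month default...
print(past_retention(date(2024, 1, 1), date(2025, 2, 4)))      # True
# ...but still inside a documented 90-day extension.
print(past_retention(date(2024, 1, 1), date(2025, 2, 4), 90))  # False
```

A real deployment would read creation dates from the storage system's metadata rather than passing them by hand.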

6. Risk & Compliance

CUES will manage AI risks through structured frameworks and adhere to relevant standards.

  • Regulatory compliance – Ensure AI use aligns with applicable laws (GLBA, GDPR, CCPA, etc.), the CUES Privacy Policy and the Acceptable Use Policy. Employees must disclose AI involvement in external‑facing work.
  • Terms & Agreements – Maintain clear Terms of Service and Data Processing Agreements for AI services and vendors. These agreements are updated to reflect evolving obligations.
  • Certifications – Pursue and maintain recognized certifications for responsible AI development (e.g., ISO 42001).
  • Risk assessments – Perform the following assessments for AI use cases:
      • AI Impact Assessment (AIA) – Evaluate potential harms, benefits, biases and ethical considerations.
      • Data Protection Impact Assessment (DPIA) – Analyze privacy risks, data sensitivity and compliance requirements.
      • Transfer Impact Assessment – Assess risks of cross‑border data transfers and ensure compliance.
      • Environmental Impact Review – Evaluate the environmental footprint of AI models and operations.
  • Quarterly reviews – Leadership will conduct quarterly AI usage reviews to ensure compliance and identify emerging risks.
  • Acceptable Use Policy – Adhere to the Acceptable Use Policy that governs responsible use of AI services. Any violation may result in disciplinary action.

7. Intellectual Property & Content Integrity

CUES recognizes that AI‑generated content may not be original. Employees must:

  • Fact‑check and verify AI‑generated outputs.
  • Respect copyright and trademark laws; do not generate outputs that infringe on third‑party IP rights.
  • Acknowledge that members or customers retain ownership of AI‑generated videos and related content.
  • Ensure all content complies with CUES’ Content Integrity Policy. Inappropriate or harmful content will be moderated and removed.

8. Oversight & Accountability

Oversight ensures that AI use is consistent with this policy and that accountability is enforced across the organization.

  • Department oversight – Department heads are responsible for monitoring AI use within their areas. They must ensure that all AI use cases are documented in the AI Use Case Inventory and comply with this policy.
  • IT Team – The IT Team will oversee the AI policy, review higher‑risk use cases and maintain the AI inventory. The team will meet regularly and report to the leadership team.
  • External audits & red teaming – Independent experts will periodically audit AI systems for fairness, security and reliability. External “red teams” will test our systems to identify vulnerabilities. Summaries of audits will be shared with leadership and, where appropriate, the membership.
  • Reporting channels – Provide an open channel (e.g., Adverse Impact Reporting Form) for members, employees and partners to report concerns related to our AI systems. Reports will be reviewed by the IT Team and responded to in a timely manner.
  • Role‑based controls & moderation – Implement role‑based access controls and automated moderation for AI outputs. Audit trails will be maintained for accountability.
  • Enforcement – Misuse of AI tools may result in disciplinary action, up to and including termination of employment or partnership.

8.1 IT Team (Internal)

The IT Team is an internal cross‑functional group. At minimum, representation includes:

  • Technology / Data & Analytics
  • Compliance / Risk
  • Organizational Development / HR
  • Legal (as needed)
  • Member Experience / Marketing (as needed)

Responsibilities:

  • Review and approve higher‑risk AI use cases (member data, external outputs, decision support).
  • Confirm privacy alignment and completion of AI Impact Assessments and DPIAs.
  • Maintain the AI Use Case Inventory and training requirements.
  • Coordinate external audits and red‑team testing.
  • Publish annual transparency reports summarizing AI use, risk assessments and incidents.

9. Employee Training & Enablement

  • Required training – All employees must complete training on secure, ethical and accessible AI use. Refresher sessions are required annually or when tools or regulations change.
  • Advanced modules – Specialized modules on bias mitigation, consent management, AI impact assessments, data protection and accessibility are available through the CUES Learning Portal.
  • Community & partnerships – Employees are encouraged to engage with industry initiatives such as the Partnership on AI and the Content Authenticity Initiative. CUES may sponsor participation in external courses and certifications.
  • Diversity & representation – Training will emphasize inclusive design, representation and accessibility when developing or deploying AI systems.

10. Future Governance & Continuous Improvement

CUES will continuously evaluate new AI risks and opportunities. The IT Team will update this policy in response to regulatory or technological changes. We will:

  • Engage with regulators, standards bodies and peer organizations to remain aligned with evolving best practices.
  • Encourage members and employees to provide feedback on AI experiences via reporting channels and community events.
  • Review environmental impacts and adjust practices to support sustainability.

11. Meeting Recording & Note‑Taking AI Tools

CUES uses approved meeting recording and note‑taking tools to support transcription, summarization and action‑item capture while applying strong privacy, consent and data‑handling controls.

11.1 Approved Tools

  • Internal meetings: Teams Maestro (approved organizational tool).
  • External meetings: Zoom recording and transcription features may be used when required, subject to participant consent and approved storage.
  • Additional tools: Any additional meeting assistant tools require review and approval through the IT Team.

11.2 Meeting Types and Storage Expectations

  • One‑on‑one meetings: May be recorded with consent. Recordings and transcripts are considered managerial working materials and should be stored locally or in restricted‑access locations; they should not be stored in broadly accessible folders.
  • Organizational meetings (e.g., Leadership Team, Extended Leadership, CUES Everyone): If recorded, recordings and transcripts must be stored in CUES‑approved central locations as defined in the Meeting Recording & AI Note‑Taking SOP.
  • Department meetings: If recorded, store recordings and transcripts in the department’s approved SharePoint location as defined in the SOP.

11.3 Acceptable Use

Employees may use approved AI meeting‑recording tools for:

  • Transcribing internal/external meetings with participant consent.
  • Generating summaries, key takeaways and action items.
  • Searching conversation archives to surface insights and commitments.
  • Enabling leadership to review meeting trends and analytics.

11.4 Prohibited Use

  • Recording meetings without prior consent.
  • Sharing transcripts containing confidential data outside approved systems.
  • Allowing public access to meeting data.

11.5 Data Retention & Security

  • Recordings and transcripts must remain within CUES‑approved storage.
  • Retention follows CUES’ information‑governance schedule (≤ 12 months unless extended).
  • Exports must be encrypted and shared only with authorized parties.
  • If recordings or transcripts are downloaded, they must be stored securely and deleted when no longer needed.
  • SSO and role‑based controls apply to all meeting‑recording tools.

11.6 Supporting Operational Guidance

CUES will maintain a companion operational document (Meeting Recording & AI Note‑Taking SOP) that defines standard workflows, storage locations, consent templates, accessibility considerations and training guidance for staff.

12. Organizational AI Use Case Governance

CUES manages AI use through a structured, organization‑wide governance approach. This ensures AI is adopted intentionally, responsibly and in alignment with CUES’ mission, regulatory obligations and commitment to member trust.

12.1 Use Case Identification & Inventory

  • All AI use cases — exploratory, pilot or production — must be formally identified.
  • Each department is responsible for identifying AI use cases within its functional area.
  • All use cases must be logged in the centralized AI Use Case Inventory, maintained by CUES. The inventory includes status, risk level and documentation.
  • No AI use case may advance beyond experimentation unless it is documented and reviewed.
  • Each recorded use case must include, at minimum: business owner and department; AI category (Generative AI, Machine Learning or AI Agents); intended users (internal staff, leadership, members, etc.); current status (idea, pilot, production, retired); and risk level (low, moderate, high).
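The minimum record fields listed above can be sketched as a small data structure. The class, field, and enumeration names below are illustrative assumptions for a possible inventory entry, not a prescribed schema; the example entry is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class AICategory(Enum):
    GENERATIVE_AI = "Generative AI"
    MACHINE_LEARNING = "Machine Learning"
    AI_AGENT = "AI Agents"

class Status(Enum):
    IDEA = "idea"
    PILOT = "pilot"
    PRODUCTION = "production"
    RETIRED = "retired"

class RiskLevel(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class UseCaseRecord:
    """One entry in the AI Use Case Inventory (minimum fields)."""
    name: str
    business_owner: str
    department: str
    category: AICategory
    intended_users: list[str]
    status: Status
    risk_level: RiskLevel

# Hypothetical entry for illustration only.
record = UseCaseRecord(
    name="Industry news monitoring agent",
    business_owner="J. Doe",            # hypothetical owner
    department="Research",
    category=AICategory.AI_AGENT,
    intended_users=["internal staff"],
    status=Status.PILOT,
    risk_level=RiskLevel.LOW,
)
```

Keeping category, status, and risk level as closed enumerations makes the inventory filterable and prevents free‑text drift between departments.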

12.2 Purpose, Impact & Safety Documentation

For each AI use case, CUES requires clear documentation to support governance, oversight and accountability. Each use case must document:

  • Purpose – The business problem being addressed and why AI is appropriate.
  • Business Impact – Expected efficiency gains, quality improvements, cost reduction or member experience benefits.
  • Safety & Risk Considerations – Data sensitivity, compliance implications, model limitations, potential misuse and results of AI Impact Assessments and DPIAs.
  • People Impact – Impact on employees, workflows, decision authority and required training or change management.
  • Environmental Impact – Estimated environmental footprint and mitigation strategies.

12.3 Selection of GenAI vs. Machine Learning vs. AI Agents

CUES selects AI approaches based on business need and risk profile:

  • Generative AI (GenAI) – Use for drafting, summarization, ideation or conversational assistance. Outputs require human judgment and review. Variability in responses is acceptable.
  • Machine Learning (ML) – Use for prediction, classification, scoring or pattern recognition. Requires consistency, repeatability and measurable accuracy. Outcomes are driven by historical data and defined metrics.
  • AI Agents – Use for tasks requiring multi‑step execution or orchestration across systems. Automation should reduce manual effort while maintaining human oversight. Workflows often span tools, data sources or recurring operational actions.

12.4 Review, Approval & Scaling

  • Low‑risk internal use cases may proceed with department‑level approval.
  • Member‑impacting use cases (involving member data, decision support, or external‑facing outputs) require review by the IT Team.
  • Scaling any AI solution beyond pilot use requires:
      • Demonstrated business value and success metrics.
      • Documented safety, compliance and ethical controls, including completed risk assessments.
      • Defined ownership, accountability and training requirements.
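The scaling requirements above amount to an all‑or‑nothing gate, which can be sketched as a checklist evaluation. The checklist keys and function name are illustrative assumptions, not part of any CUES system.

```python
# Minimal sketch of the pilot-to-production gate described above.
# Checklist keys are illustrative, not a prescribed schema.
SCALING_CHECKLIST = (
    "business_value_demonstrated",    # success metrics met
    "risk_assessments_complete",      # AIA/DPIA and controls documented
    "ownership_and_training_defined", # accountable owner and training plan
)

def may_scale(use_case: dict) -> bool:
    """Return True only when every scaling requirement is satisfied."""
    return all(use_case.get(item, False) for item in SCALING_CHECKLIST)

pilot = {
    "business_value_demonstrated": True,
    "risk_assessments_complete": True,
    "ownership_and_training_defined": False,  # training plan still pending
}
print(may_scale(pilot))  # False until all three requirements are met
```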

12.5 Ongoing Review

  • AI use cases will be periodically reviewed to ensure continued alignment with organizational goals and regulatory expectations.
  • Use cases may be modified, paused or retired based on risk, performance or organizational need.
  • A transparency report summarizing active AI use cases, their impacts and any incidents will be published annually.

Appendix A: Current AI Tools & Automations (Inventory Snapshot)

Note: The authoritative inventory is maintained in the AI Use Case Inventory. This appendix provides a snapshot for awareness and may change over time.

Generative AI / Productivity Tools

  • Teams Maestro
  • Zoom recording/transcription (external meetings)
  • Microsoft Copilot (within the Microsoft ecosystem) / ChatGPT
  • Recording assistant device/app used for dictation or notes (as approved)
  • Higher Logic (CUESNet) AI features (planned/pilot as applicable)

Machine Learning, Development & Production (AWS)

Used for development, personalization, and production AI solutions.

  • Amazon Bedrock: Approved for LLM apps, RAG pipelines, and enterprise AI solutions (personalization, internal assistants, member experiences).
  • Amazon SageMaker: Approved for model training, deployment, and experimentation (fraud detection, predictive ML, custom models).
  • Amazon Personalize: Approved for personalization of content and member experiences.
  • Amazon Q in QuickSight

AI Agents, Automation & Development Tools

Used for building AI workflows, internal tools, and experimentation.

  • OpenAI API: Approved for building custom GPTs and automation workflows (internal agents, chatbot integrations).
  • CUES Master Calendar agent
  • New CEO tracking/report agent
  • Mergers and acquisitions monitoring/report agent
  • Industry news monitoring/summary agent
  • CUESBot

Video & Content Production

Used for training, marketing, and content production.

  • Synthesia: Approved for training videos and internal communications.

Appendix B: Glossary

  • AI Impact Assessment (AIA): Evaluation of potential ethical, social and technical impacts of an AI use case.
  • Data Protection Impact Assessment (DPIA): Assessment of privacy risks and mitigation measures required for processing personal data.
  • Transfer Impact Assessment: Evaluation of risks associated with cross‑border data transfers.
  • AI Use Case Inventory: Centralized repository of all AI use cases, including status, risk level, owners and supporting documentation.
  • AI Agent: An autonomous or semi‑autonomous system that performs tasks across multiple steps or tools.

Appendix C: References & Links

This policy references several supporting documents and resources. Internal links will be added to direct readers to these materials on the CUES intranet. Key references include:

  • CUES Privacy Policy – details data‑handling requirements and member consent procedures.
  • AI Impact Assessment Template – provides a standardized form for evaluating ethical and social impacts of AI use cases.
  • DPIA Template – provides a standardized form for assessing privacy risks.
  • Acceptable Use Policy – governs responsible use of AI services.
  • Meeting Recording & AI Note‑Taking SOP – outlines operational procedures for recording and transcribing meetings.
  • AI Use Case Inventory – interactive dashboard listing all AI use cases with statuses and risk ratings.
  • Adverse Impact Reporting Form – allows members, employees and partners to report concerns related to our AI systems.
  • CUES Learning Portal – houses training materials and resources for employees to learn about AI ethics, data protection, bias mitigation and accessibility.
  • Partnership on AI & Content Authenticity Initiative – industry organizations offering resources and collaboration opportunities.

---

Revision & Maintenance: This AI Governance Policy will be reviewed at least annually by the IT Team and updated as regulations, technologies and organizational needs evolve. Employees, members and partners are encouraged to provide feedback via the reporting channels outlined above.

Use‑Case Lifecycle

A compact visual module for how AI initiatives move from idea → production → review.

Identify

Log the use case in the inventory.

Assess

Complete AIA/DPIA and required controls.

Approve

Department or IT Team review.

Scale & Review

Monitor outcomes; modify, pause, or retire.
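The four stages above form a simple lifecycle, which can be sketched as a small state machine. The stage names follow the cards; the specific transitions (including looping back from review to reassessment) are illustrative assumptions.

```python
# Minimal sketch of the use-case lifecycle:
# identify -> assess -> approve -> scale & review.
# Transitions are illustrative assumptions, not an official workflow spec.
LIFECYCLE = {
    "identify": ["assess"],                   # log the use case in the inventory
    "assess": ["approve"],                    # complete AIA/DPIA and controls
    "approve": ["scale_and_review"],          # department or committee review
    "scale_and_review": ["scale_and_review",  # monitor outcomes
                         "assess",            # modify -> reassess
                         "retired"],          # pause or retire
    "retired": [],
}

def advance(stage: str, next_stage: str) -> str:
    """Move to next_stage if the transition is allowed; raise otherwise."""
    if next_stage not in LIFECYCLE[stage]:
        raise ValueError(f"{stage} -> {next_stage} is not a valid transition")
    return next_stage

stage = "identify"
for step in ("assess", "approve", "scale_and_review"):
    stage = advance(stage, step)
```

Encoding the transitions explicitly makes it impossible to, say, scale a use case that was never assessed, mirroring the gating rules in section 12.4.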

Resources & FAQ

Key internal resources

FAQ: Can I put member data into AI tools?

No—unless the tool and the specific use case are explicitly approved, secured, and documented for that purpose. Default to data minimization and approved environments.

FAQ: Who owns AI‑generated content?

Members/customers retain ownership of content created with their data. CUES does not reuse member data for model training without explicit consent.

FAQ: How are AI concerns handled?

Reports are reviewed by the IT Team, triaged for severity, and escalated as needed. Findings are incorporated into quarterly reviews and annual transparency reporting.