CUES Acceptable Use Policy

Plain-language guardrails for how CUES staff should and should not use AI tools in everyday work, with emphasis on confidentiality, review responsibility, and approved use.

Tags: Staff guidance · Everyday AI use · Practical guardrails · Approved tools only · Human review required · Protect data

Executive Summary

This Acceptable Use Policy defines safe day-to-day use of AI tools at CUES. It is intended to enable responsible experimentation while protecting confidential information, preserving trust, and keeping staff accountable for the content and decisions they produce.

Core principle: AI can assist with research, drafting, summarization, brainstorming, and workflow acceleration, but it does not replace human judgment, policy compliance, or final accountability.

Allowed Uses

  • Drafting outlines, summaries, agendas, or first-pass communications
  • Brainstorming ideas, naming options, and content structures
  • Summarizing approved internal materials in approved systems
  • Generating low-risk productivity support with human review before use

Prohibited or Restricted Uses

  • Entering confidential or restricted data into unapproved tools
  • Presenting unverified AI output as fact when accuracy matters
  • Delegating sensitive decisions entirely to AI without accountable human review
  • Uploading recordings, personnel details, or member-related information unless the use case and platform are approved
  • Misrepresenting AI-generated material as independently verified when it has not been checked

User Responsibilities

Verify accuracy: Check facts, numbers, names, quotes, and recommendations before sharing.
Protect data: Use only approved systems for sensitive inputs, recordings, and exports.
Disclose as needed: Be transparent internally when AI materially shaped a deliverable or summary.
Escalate concerns: Report harmful outputs, bias, privacy concerns, or unexpected behavior.

FAQ

Can I use public AI tools for work? Only when the tool and the data being used are approved for that purpose.

Who is accountable for final output? The staff member and business owner remain accountable, even when AI assisted the work.

What if a use case falls outside this guidance? Route it through the AI governance process and document it in the inventory.