Executive Summary
This Acceptable Use Policy defines safe day-to-day use of AI tools at CUES. It is intended to enable responsible experimentation while protecting confidential information, preserving trust, and keeping staff accountable for the content and decisions they produce.
Allowed Uses
- Drafting outlines, summaries, agendas, or first-pass communications
- Brainstorming ideas, naming options, and content structures
- Summarizing approved internal materials in approved systems
- Performing low-risk productivity tasks, provided a human reviews the output before use
Prohibited or Restricted Uses
- Entering confidential or restricted data into unapproved tools
- Presenting unverified AI output as fact when accuracy matters
- Delegating sensitive decisions entirely to AI without accountable human review
- Uploading recordings, personnel details, or member-related information unless the use case and platform are approved
- Misrepresenting AI-generated material as independently verified when it has not been checked
User Responsibilities
| Responsibility | Expectation |
|---|---|
| Verify accuracy | Check facts, numbers, names, quotes, and recommendations before sharing. |
| Protect data | Use only approved systems for sensitive inputs, recordings, and exports. |
| Disclose as needed | Be transparent internally when AI materially shaped a deliverable or summary. |
| Escalate concerns | Report harmful outputs, bias, privacy concerns, or unexpected behavior. |
FAQ
**Can I use public AI tools for work?** Only when the tool and the data being used are approved for that purpose.

**Who is accountable for final output?** The staff member and business owner remain accountable, even when AI assisted the work.

**What if a use case falls outside this guidance?** Route it through the AI governance process and document it in the inventory.
