Generative AI can perform a multitude of tasks, yet it also produces errors, reflects bias, and can short‑circuit learning when overused. This page offers a concise checklist of permitted and prohibited uses, three short literacy modules, a quiz on AI in higher education, and a quick citation guide.

AI Use at Puget Sound

What you may do:
✅ Use AI tools in your coursework only to the extent that your instructor explicitly permits it.
✅ When you do use AI (with instructor permission) as part of your coursework, you must always cite the AI model as described in the Citation & Transparency Quick-Guide below.
✅ Build your AI expertise through experimentation with prompting, image generation, and other use cases for non-academic purposes.
What you must not do:
🚫 Submit AI‑generated text as original work when its use has not been explicitly permitted.
🚫 Provide personal, identifiable, or confidential university data to AI tools that have not been sanctioned by the university.
🚫 Use AI to bypass learning objectives.
🚫 Violate the university's academic integrity policy.
AI Literacy Micro-Modules

Below are three short primers, each designed to be read in about two minutes, that cover common pitfalls and best practices when working with generative AI.

Large language models predict the next word rather than retrieving verified data; when context runs thin, they invent material. This hallucination risk can surface as fake citations, imaginary statistics, or fabricated quotations.

Why Does AI Hallucinate?

Generative AI systems produce inaccurate content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that contains misinformation, so they reproduce falsehoods without checking facts.
  2. How Generative Models Work: These systems predict what comes next based on patterns, focusing on plausible-sounding content rather than accuracy.
  3. Design Limitations: The technology can't distinguish between true and false information, so it generates new inaccurate content by combining patterns.

AI hallucinations result from flawed training data, pattern-based generation, and fundamental technology limits.
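The pattern-prediction idea above can be made concrete with a toy sketch. This is not how production models work (they use neural networks trained on billions of documents), but the principle is the same: the model picks a statistically plausible next word, with no notion of whether it is true.

```python
import random

# Toy "language model": counts which word follows which in its training text.
# The training text deliberately includes one falsehood, just as internet
# training data does.
training_text = (
    "the capital of france is paris . "
    "the capital of france is lyon . "   # a falsehood in the training data
    "the capital of spain is madrid ."
)

# Build a table: word -> list of words observed to follow it.
follows = {}
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

def predict_next(word):
    """Return a plausible next word -- frequent continuations win, true or not."""
    return random.choice(follows.get(word, ["."]))

# The model happily continues "is" with "paris", "lyon", or "madrid":
# it reproduces patterns from its training data, it does not check facts.
print(predict_next("is"))
```

Because "lyon" appears in the training data just as often as a true continuation would, the toy model treats it as equally plausible — a miniature version of why real systems reproduce misinformation they have seen.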

How to keep control

  • Treat AI output as a starting point for exploring data and evidence, never as a definitive source.
  • Ask the model for DOIs or URLs, then check them; fake links break on inspection.
  • Cross‑check facts in databases or peer‑reviewed literature.
  • Document any claim you keep so readers can verify it.

Take‑away: always verify with an independent source.


AI echoes its training data, which often favor dominant groups. This can appear as the omission of minoritized scholars, stereotyped language, or skewed sentiment analysis.

Why Does AI Produce Biased Content?

Generative AI systems produce biased content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that reflects existing social and cultural biases from human-created content.
  2. How Generative Models Work: These systems reproduce patterns from their training data, including discriminatory language and perspectives they've observed.
  3. Design Limitations: The technology can't identify or correct biases, so it amplifies existing prejudices by combining biased patterns in new ways.

AI bias results from biased training data, pattern reproduction, and the inability to recognize discriminatory content.

Spot and reduce bias

  1. Prompt for multiple perspectives or under‑represented voices.
  2. Ask the model to list its main sources and examine their diversity.
  3. Compare AI summaries with scholarly reference works or peer‑reviewed literature.
  4. Note in your assignment how you checked for bias.

Bottom line: inclusive prompts and critical reading can help reduce the impact of algorithmic echo chambers.


Public AI services often store prompts and reserve the right to review them. Some retrain on user data. Uploading sensitive material can violate privacy laws or research ethics.

Why Does AI Pose Privacy Risks?

AI systems create privacy concerns for users in three main ways:

  1. Training Data Collection: AI models are trained on vast quantities of data, much of it copyrighted material; using copyrighted works to train AI models may constitute prima facie infringement.
  2. User Data Retention: Queries, instructions, and conversations submitted to tools such as ChatGPT may be retained indefinitely unless the user deletes them, and may include sensitive data such as personal details and proprietary information.
  3. Inadvertent Data Exposure: There is a risk of inadvertently capturing and exposing sensitive user data, particularly in chatbots where personal or confidential information is often disclosed.

AI privacy risks stem from unauthorized use of copyrighted training data, indefinite storage of user conversations, and potential exposure of sensitive information shared with AI tools.

Safeguards

  • Strip names, IDs, and unpublished data.
  • Avoid uploading copyrighted texts.
  • Use university‑approved tools for institutional data.
  • Consult your course professor or faculty advisor regarding high‑risk cases.

Key point: treat public AI tools like social media—anything you post could become public.


How much do you know about emerging trends in AI and academics?

Citation & Transparency Quick-Guide

When any generative‑AI system influences your work, whether it drafts text, suggests code, rewrites prose, or analyzes data, you must acknowledge that assistance. Good practice combines a brief disclosure statement with a formal reference. The templates below illustrate the minimum information required; replace the placeholders (Developer, Tool Name, etc.) with the details of the tool you used.

For each style, pair a reference‑list (or bibliography) entry with a brief disclosure statement:

APA 7

  • Reference‑list entry: Developer. (Year). Tool Name (model version) [Generative‑AI system]. URL
  • Disclosure example: “Initial outline created with Tool Name (Developer), revised by the author.”

MLA 9

  • Works‑cited entry: Developer. Tool Name, version X.X, Year, URL.
  • Disclosure example: “Analysis assisted by Tool Name.”

Chicago 17 (note)

  • Footnote: 1. Developer, Tool Name, version X.X, interaction with author, Date, URL.
  • Placement: include the footnote (or endnote) number after the sentence that benefited from AI.

Transparency checklist

  • Identify the tool, developer, version, and date used.
  • Describe how the AI contributed (draft, summary, code comment, etc.).
  • Keep a copy of the chat, prompt chain, or output; instructors may request it.
  • If the AI influenced your thinking but none of its text remains, a short note (e.g., “Brainstormed topic ideas with an AI tool”) is still advisable.

When in doubt, disclose. Undocumented AI use constitutes plagiarism under the university's academic integrity policy.