Online tools that can conduct research, synthesize course materials and other content, and produce coherent writing have become a focus of discussion in higher education. ChatGPT, Gemini, and similar tools are powerful large language models that use artificial intelligence to analyze information and generate writing. As these tools become increasingly ubiquitous, Puget Sound is committed to providing guidance on how to navigate AI in classroom pedagogy.

Here we provide some guidance for instructors related to these emerging technologies: 

  • Communication: Establish a clear policy for the use of AI-generated text in your course, communicate it through the syllabus and assignment prompts, and outline the steps to be taken in case of suspected academic misconduct involving AI tools. 
  • Explanation: Clarify with students that struggle, challenge, and uncertainty are an integral part of learning and that the use of AI tools as a shortcut or workaround to such learning may undermine their goals of becoming informed, critical thinkers. Emphasize the value and relevance of the skills and knowledge being taught in the course, and how they are integral to a liberal arts education. 
  • Writing as Process: Emphasize to students the value of writing as a process that yields learning outcomes beyond the end product itself (and which AI use can undermine): learning to generate and draft ideas, editing and clarifying text, revising and refining the final draft. 
  • Intentional Assignment Design: Design assignments that require students to connect in novel ways to course content, class discussions, and personal experience, making it more challenging for AI tools to generate appropriate responses. Consider in-class assignments, oral presentations, and question-and-answer sessions that accompany and support written assignments (however, keep in mind how different assessment methods can potentially create barriers for students with disabilities).
  • Assessing Process: Consider ways to prioritize assessment of the learning process over (only) the end product by implementing low- or no-stakes formative assignments to encourage students to build their skills over the semester, and using in-class writing to establish benchmarks.
  • Incorporate AI: Consider productive ways to incorporate AI-based tools into your teaching to prepare students for the use of technology in their future personal and professional lives. AI writing tools are increasingly impressive in their capabilities, but are still limited; familiarization with them can help encourage critical thinking about digital literacy, sources of evidence, writing style, tone, and what constitutes effective written communication.
Assignment Repository
Download, review, adapt, or just be inspired by AI-related assignments from other Puget Sound faculty.
Teaching with AI: Sample Assignments

These five assignments, adapted from Yee et al. (2023), treat ChatGPT as a text to interrogate, not a shortcut to answers. Each activity gives students a concrete task: shaping prompts, tightening searches, verifying citations, distilling long readings, or rebutting counterarguments. They then require students to pause and test the model's accuracy, logic, or bias. The sequence builds "AI fluency": students learn when to consult the tool, how to improve its output, and where human judgment must override it. Faculty can adopt the templates as-is or adapt them to course-specific goals; in every case, the closing reflection anchors technical practice in critical inquiry. Together the exercises model a classroom stance that is neither alarmist nor naïve: use the new technology, scrutinize it, and keep disciplinary standards firmly in view.

 

Yee, K., Whittington, K., Doggette, E., & Uttich, L. (2023). ChatGPT assignments to use in your classroom today. FCTL Press. 

Learning objective
Students test how follow-up questions sharpen or distort answers while noting where the model's knowledge or logic breaks down.

Faculty setup

  1. Choose a discipline-specific puzzle (e.g., a sudden data anomaly).
     
  2. Require at least six turns: opening query, four follow-ups, and a closing accuracy check.
     
  3. After the chat, students annotate each turn: the intent of their follow-up and what changed.
     
  4. Peer partners read one another's transcripts and identify a missed chance to verify evidence.

Sample sequence

  • Turn 1: "List three explanations for the sharp drop in Arctic sea-ice extent in 2012."
     
  • Turn 2: "Exclude ocean-heat content; focus on atmospheric circulation."
     

  • …
  • Turn 6: "Summarize the leading explanation in 150 words and flag unresolved variables."

Critical reflection
Students finish with a 250-word note on where ChatGPT speculated, hedged, or cited dubious data.

What Students Turn In
Annotated transcript and reflection.

Evaluation notes
Look for purposeful questioning, clear annotations, and thoughtful critique of AI limits.

Learning objective
Students practice narrowing AI searches and judging whether the resulting evidence is reliable.

Faculty setup

  1. Supply a broad question common to the course.
     
  2. Students produce an initial answer, then add two successive constraints (method, date range, population, etc.).
     
  3. For each version they note accuracy, depth, and citation quality, explaining any shifts.
     
  4. Close with a librarian-led discussion comparing AI search tactics to database strategies.

Sample prompts

  • Initial: "Explain the role of public-private partnerships in urban water management."
     
  • Constraint 1: "Limit to cities in sub-Saharan Africa after 2000."
     
  • Constraint 2: "Focus on partnerships involving community-owned utilities and report funding gaps."

Critical reflection
Students identify one claim from each answer that requires outside verification and describe how they would confirm or reject it.

What Students Turn In
A short comparative table and a 300-word critique of the most informative version.

Evaluation notes
Assess clarity of constraints, depth of critique, and awareness of AI's search blind spots.

Learning objective
Students learn to confirm every citation and classify common fabrication patterns.

Faculty setup

  1. Provide an AI-generated bibliography of 8–10 items, some real, some invented.
  2. Students verify each entry in the library catalogue, mark status (found, altered, missing), and supply corrected metadata where possible.
  3. They group errors by type (non-existent journal, wrong volume, ghost page range) and suggest safeguards.

Sample prompt for students
"Generate an annotated APA bibliography (eight sources, 2019–present) on indigenous land-rights litigation in Latin America."

Critical reflection
Include a 400-word memo on why large language models hallucinate citations and how researchers can pre-empt the problem.

What Students Turn In
Verification table and memo.

Evaluation notes
Check accuracy of verification, clarity of error categories, and depth of memo.

Learning objective
Students compare AI and human abstracts, noting omissions, nuance loss, or misplaced emphasis.

Faculty setup

  1. Assign an article of about 8,000 words.
  2. Students request a 200-word ChatGPT abstract, then write their own 200-word version.
  3. Using tracked changes, they merge the two and highlight what each missed.

Sample prompts

  • "Provide a 200-word abstract of Smith (2024) 'Microplastic Pathways in Estuarine Systems' and state any data limitations the author notes."
  • Follow-up: "List three concepts you omitted because of the word limit."

Critical reflection
A 150-word note on how AI compression can distort argument structure or overstate certainty.

What Students Turn In
AI abstract, student abstract, merged version with highlights, and the brief note.

Evaluation notes
Evaluate fidelity to the source and insight into summarization risks.

Learning objective
Students anticipate serious opposition, gauge the strength of AI-generated rhetoric, and respond with evidence.

Faculty setup

  1. Students draft a thesis on a debated topic.
  2. ChatGPT supplies three counterarguments, each citing at least one study.
  3. Students rank the counterarguments, integrate the strongest into their essay, and rebut it.
  4. Footnotes mark every sentence influenced by AI.

Sample prompts

  • "Given the thesis 'Universal basic income is fiscally sustainable in the United States,' generate the three strongest counterarguments, each with supporting evidence."
  • "State in 100 words which counterargument is strongest and why."

Critical reflection
In 200 words, students consider whether AI's counterarguments reflect particular ideological biases or selective evidence.

What Students Turn In
Ranked counterarguments, revised essay section (≥500 words), footnotes, and reflection.

Evaluation notes
Measure relevance of counterarguments, depth of rebuttal, and transparency about AI input.

AI Detector

Detector Tools: Limitations

Detectors of AI-generated text estimate probability, not authorship. Benchmarks at Stanford (2024) and Brandeis (2025) report false-positive rates of 15–25 percent, higher for multilingual writers, and false-negative rates above 20 percent when students paraphrase or lightly edit AI output. Commercial detectors train on yesterday's models; new releases quickly slip past them. Some services also archive uploaded papers, raising privacy concerns. A detector score alone is therefore insufficient evidence of misconduct.
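
To see what these error rates imply in practice, consider a purely hypothetical class of 100 submissions in which 10 involve AI, with a detector operating at a 20 percent false-positive and 20 percent false-negative rate (consistent with the ranges above). The detector would flag about 18 of the 90 honest papers alongside 8 of the 10 AI-assisted ones, so roughly two of every three flags would point at an innocent student. The class size and AI-use rate here are illustrative assumptions, not data.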

Recommendation for faculty

Use detector results only as a conversation starter: compare the work with earlier assignments, invite the student to describe their drafting process, and examine citation patterns. AI detector scores are not considered proof of academic dishonesty.

Suggested Syllabus Language

Below are four sample statements describing different levels of generative-AI use, from complete prohibition to active encouragement. Feel free to borrow, adapt, or combine this language to suit the aims of your course. Some instructors may also wish to include a signature line (digital or handwritten) so that students explicitly acknowledge the chosen policy.

The LMIS Committee, which authored this language in spring 2025, wishes to acknowledge Auburn University's Biggio Center for inspiration in its development.

In this course, any use of generative-AI tools is not allowed and constitutes academic dishonesty. All work must be the product of the students themselves. Students may not consult or receive assistance from a generative-AI tool, whether directly or indirectly. If you have questions about the use of AI in this course, please ask your professor.

In this course, all work that students turn in must be produced by the students themselves. Any direct use of AI-generated material within an assignment constitutes academic dishonesty. Students may use AI in indirect ways, such as consultation for brainstorming purposes or as a study assistant. However, students should be forewarned that AI tools are frequently inaccurate, and they remain responsible for their own study processes. Further, AI-generated material must not be copied or paraphrased into an assignment. If you have questions about the use of AI in this course, please ask your professor.

In this course, certain assignments will permit the use of AI. These assignments will be explicitly designated. In addition, students may use AI in indirect ways, such as consultation for brainstorming purposes or as a study assistant. Students must always disclose the use of AI-generated material and specify how it was used. The student is also responsible for any misinformation or legally protected data that they incorporate into their work. If you have questions about the use of AI in this course, please ask your professor.

An example disclosure might be as follows: "I/we used [tool name], a generative AI created by [tool provider], in the preparation of this paper/report/thesis/etc. It was used for brainstorming, text generation, grammatical editing, and citation generation."

In this course, students are encouraged to use generative AI to assist with coursework (unless specifically told otherwise). Students must always disclose the use of AI-generated material and specify how it was used. The student is also responsible for any misinformation or legally protected data that they incorporate into their work. If you have questions about the use of AI in this course, please ask your professor.

An example disclosure might be as follows: "I/we used [tool name], a generative AI created by [tool provider], in the preparation of this paper/report/thesis/etc. It was used for brainstorming, text generation, grammatical editing, and citation generation."

Academic Honesty

One of the main concerns many faculty have with the use of ChatGPT and other AI-based tools in academic settings is the potential for academic misconduct. As these tools are constantly evolving and students may use the generated text in different ways, it is challenging to accurately identify AI-generated student submissions. Because AI writing detectors are easily subverted by light editing of the text, we recommend focusing on the strategies outlined above to build assignments and assessment approaches that emphasize academic honesty.

Violations of Academic Integrity

At Puget Sound, violations of academic integrity are taken very seriously as they threaten the atmosphere of trust, fairness, and respect essential to learning and the dissemination of knowledge. Claiming the work of others as your own, whether created by another human or an artificial intelligence, is regarded as plagiarism, and as such is a violation of academic integrity. In situations involving suspected violations of academic integrity, procedures and sanctions established for the Hearing Board will be followed. Students are expected to be aware of and to abide by the university鈥檚 Academic Integrity Policy. Additionally, faculty members are urged to review university and course policies regarding academic integrity in their classes to ensure students clearly understand them.

This page was developed by the Office of the Academic Deans in robust collaboration with the Center for Writing and Learning, the Faculty Development Center, Collins Memorial Library, Educational Technology, and the Academic Standards Committee. It will be updated if and when any relevant policies change, as well as to add new resources and guidance. 

AI Literacy Micro-Modules

Below are three short primers, each designed to be read in about two minutes, that cover common pitfalls and best practices when working with generative AI.

Large language models predict the next word rather than retrieve verified data; when the context runs thin, they invent material. This hallucination risk can surface as fake citations, imaginary statistics, or fabricated quotations.

Why Does AI Hallucinate?

Generative AI systems produce inaccurate content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that contains misinformation, so they reproduce falsehoods without checking facts.
  2. How Generative Models Work: These systems predict what comes next based on patterns, focusing on plausible-sounding content rather than accuracy.
  3. Design Limitations: The technology can't distinguish between true and false information, so it generates new inaccurate content by combining patterns.

AI hallucinations result from flawed training data, pattern-based generation, and fundamental technology limits.
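
The mechanism is easier to see in miniature. Below is a deliberately toy Python sketch, with a hand-built probability table standing in for a trained model (every word and weight in it is invented for illustration): the loop samples whatever continuation is most plausible and at no point consults a source of truth.

    import random

    # Hypothetical "learned" patterns: each word maps to plausible next words
    # with weights. The table encodes plausibility only, never truth.
    patterns = {
        "The": [("study", 0.5), ("journal", 0.5)],
        "study": [("(Smith,", 0.7), ("found", 0.3)],
        "journal": [("(Smith,", 1.0)],
        "(Smith,": [("2019)", 0.5), ("2021)", 0.5)],  # plausible-looking, possibly invented citation
    }

    def generate(start, max_steps=4):
        # Repeatedly sample the next word by plausibility; no step verifies facts.
        words = [start]
        for _ in range(max_steps):
            options = patterns.get(words[-1])
            if not options:
                break
            tokens, weights = zip(*options)
            words.append(random.choices(tokens, weights=weights)[0])
        return " ".join(words)

    print(generate("The"))  # e.g. "The study (Smith, 2019)": fluent, unverified

A real model works over billions of parameters rather than a lookup table, but the generation loop is the same in spirit: likely next text, not verified true text.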

How to keep control

  • Treat AI output as a starting point for exploring data and evidence, never as a definitive source.
  • Ask the model for DOIs or URLs, then check them; fake links break on inspection (a quick automated check is sketched after this list).
  • Cross-check facts in databases or peer-reviewed literature.
  • Document any claim you keep so readers can verify it.
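
As one way to act on the link-checking tip above, the following minimal Python sketch tests whether a DOI actually resolves at doi.org. It assumes the third-party requests package, and the DOI in the final line is a placeholder, not a real reference.

    import requests

    def doi_resolves(doi):
        # doi.org redirects real DOIs to a publisher landing page.
        # Some publishers block HEAD requests; retry with requests.get if needed.
        try:
            resp = requests.head("https://doi.org/" + doi,
                                 allow_redirects=True, timeout=10)
            return resp.status_code == 200
        except requests.RequestException:
            return False

    print(doi_resolves("10.1000/xyz123"))  # fabricated DOIs typically print False

Pasting the DOI into doi.org by hand works just as well; the point is that the check happens outside the chatbot.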

Take-away: always verify with an independent source.

AI echoes its training data, which often favor dominant groups. This can appear as the omission of minoritized scholars, stereotyped language, or skewed sentiment analysis.

Why Does AI Produce Biased Content?

Generative AI systems produce biased content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that reflects existing social and cultural biases from human-created content.
  2. How Generative Models Work: These systems reproduce patterns from their training data, including discriminatory language and perspectives they've observed.
  3. Design Limitations: The technology can't identify or correct biases, so it amplifies existing prejudices by combining biased patterns in new ways.

AI bias results from biased training data, pattern reproduction, and the inability to recognize discriminatory content.
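
The reproduction mechanism can also be shown in miniature. The Python sketch below counts pronoun co-occurrences in an invented, deliberately skewed four-sentence corpus; a model trained on such data inherits the skew.

    from collections import Counter

    corpus = [
        "the doctor said he would review the chart",
        "the doctor said he was late",
        "the nurse said she would check in",
        "the doctor said she was ready",
    ]

    # Count which pronoun follows each role word: the kind of
    # co-occurrence pattern a language model absorbs from its data.
    counts = {"doctor": Counter(), "nurse": Counter()}
    for sentence in corpus:
        words = sentence.split()
        for role in counts:
            if role in words:
                counts[role][words[words.index(role) + 2]] += 1

    print(counts)  # doctor: he 2, she 1; nurse: she 1
    # A generator sampling from these counts will echo the imbalance.

Nothing in the counting step is malicious; the skew comes entirely from the data, which is exactly why prompting for under-represented perspectives matters.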

Spot and reduce bias

  1. Prompt for multiple perspectives or under-represented voices.
  2. Ask the model to list its main sources and examine their diversity.
  3. Compare AI summaries with scholarly reference works or other peer-reviewed sources.
  4. Note in your assignment how you checked for bias.

Bottom line: inclusive prompts and critical reading can help reduce the impact of algorithmic echo chambers.

Public AI services often store prompts and reserve the right to review them. Some retrain on user data. Uploading sensitive material can violate privacy laws or research ethics.

Why Does AI Pose Privacy Risks?

AI systems create privacy concerns for users in three main ways:

  1. Training Data Collection: AI models are trained on vast quantities of data, much of which is composed of copyrighted material, and using copyrighted works to train AI models may constitute prima facie infringement.
  2. User Data Retention: Every query, instruction, or conversation with ChatGPT is stored indefinitely unless deleted by the user, including sensitive data like personal details and proprietary information.
  3. Inadvertent Data Exposure: There is a risk of inadvertently capturing and exposing sensitive user data, particularly in chatbots where personal or confidential information is often disclosed.

AI privacy risks stem from unauthorized use of copyrighted training data, indefinite storage of user conversations, and potential exposure of sensitive information shared with AI tools.

Safeguards

  • Strip names, IDs, and unpublished data (a minimal redaction sketch follows this list).
  • Avoid uploading copyrighted texts.
  • Use university-approved tools for institutional data.
  • Consult your course professor or faculty advisor regarding high-risk cases.
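
For the first safeguard, a minimal Python redaction sketch is shown below. The regular expressions are illustrative only, not an exhaustive PII filter, and the sample sentence is invented.

    import re

    def redact(text):
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
        text = re.sub(r"\b\d{7,9}\b", "[ID]", text)  # student-ID-like digit runs
        text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", text)  # crude full-name match
        return text

    print(redact("Jane Doe (ID 20251234, jdoe@example.edu) requested an extension."))
    # -> [NAME] (ID [ID], [EMAIL]) requested an extension.

Manual review remains essential; automated stripping catches obvious identifiers, not contextual details that could re-identify a person.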

Key point: treat public AI tools like social media; anything you post could become public.
