Generative AI can lighten routine administrative work, from drafting correspondence to summarizing meetings, but its use with university data must stay deliberate and transparent. This section outlines practical applications, privacy safeguards, the approval path for new tools, and links to training resources so staff can adopt AI responsibly.

Practical Examples of AI Use Cases

  • Use AI tools to generate first drafts of standard communications such as reminders, status updates, and meeting summaries, being sure to check, edit, and update the AI output.
  • Convert long meeting transcripts into concise bullet‑point summaries highlighting key decisions and action items (see the sketch after this list).
  • Analyze qualitative data to find themes and patterns for further review and verification.
  • Ask AI tools to help solve coding problems or create complicated spreadsheet formulas.
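
For the transcript use case above, here is a minimal Python sketch of how a summary request might look with the openai client library. It is an illustration, not an endorsement: the model name is a placeholder, and it assumes your institution has approved the tool and issued an API key (check the Tool Status Table first).

```python
# Minimal sketch: condense a meeting transcript into bullet points.
# Assumes an institutionally approved account; the model name is a
# placeholder. Never send FERPA-protected or HR-sensitive content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_transcript(transcript: str) -> str:
    """Return a bullet-point summary of decisions and action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: substitute your approved model
        messages=[
            {"role": "system",
             "content": "Summarize this meeting transcript as concise "
                        "bullet points of key decisions and action items."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

As with any AI draft, treat the result as a first pass: check names, dates, and decisions against the original transcript before circulating it.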

General Administrative Tasks

  • Email Drafting & Editing: Quickly draft replies, meeting recaps, reminders, or policy summaries, then review and edit.
  • Meeting Notes & Summaries: Use AI to clean up or condense meeting transcripts or notes.
  • Form Letter & Template Generation: Draft consistent communications for outreach, student services, financial aid, etc.


Data, Planning, & Reporting

  • Survey Summary Analysis: Paste open-ended survey results and get a first-draft summary of themes (then validate!).
  • Report Outlines: Generate first-draft structures or content suggestions for annual reports or strategy documents.
  • Brainstorming Session Support: Use AI to synthesize brainstorming notes or create idea categories.


Data‑Privacy Cautions

Never paste FERPA‑protected or HR‑sensitive data (personally identifiable information about students, faculty, or staff) into public AI tools.

Use approved systems. Where sensitive data is required, request an on‑premise or institutionally approved AI system through Technology Services (TS) to ensure compliance.

Get guidance if uncertain. When in doubt, consult Technology Services.

Using Approved AI Tools

  • Tool Status Table
  • Do I Need Approval?
  • Request a New AI Tool

AI Literacy Micro-Modules

Below are three short primers, each designed to be read in about two minutes, that cover common pitfalls and best practices when working with generative AI.

Large language models predict the next word rather than retrieve verified data; when context runs thin, they invent material. This hallucination risk can surface as fake citations, imaginary statistics, or fabricated quotations.

Why Does AI Hallucinate?

Generative AI systems produce inaccurate content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that contains misinformation, so they reproduce falsehoods without checking facts.
  2. How Generative Models Work: These systems predict what comes next based on patterns, focusing on plausible-sounding content rather than accuracy.
  3. Design Limitations: The technology can't distinguish between true and false information, so it generates new inaccurate content by combining patterns.

AI hallucinations result from flawed training data, pattern-based generation, and fundamental technology limits.

How to keep control

  • Treat AI output as a starting point for exploring data and evidence, never as a definitive source.
  • Ask the model for DOIs or URLs, then check them; fake links break on inspection (a quick automated check is sketched below).
  • Cross‑check facts in databases or peer‑reviewed literature.
  • Document any claim you keep so readers can verify it.

Take‑away: always verify with an independent source.
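
As a concrete aid for the link-checking step above, here is a minimal Python sketch that tests whether AI-supplied DOIs resolve through doi.org. It assumes the third-party requests library; note that some publishers reject automated HEAD requests, so a failure is a prompt for manual checking rather than proof of fabrication.

```python
# Minimal sketch: check whether AI-supplied DOIs resolve via doi.org.
# Requires the third-party requests library (pip install requests).
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org resolves the DOI to a live landing page."""
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=True, timeout=10)
    # Some publishers block HEAD requests (403/405), so treat a failure
    # as "verify by hand," not as proof the citation is fake.
    return resp.status_code == 200

for doi in ["10.1000/182"]:  # replace with the DOIs you want to verify
    print(doi, "resolves" if doi_resolves(doi) else "needs manual checking")
```

A resolving DOI still needs a content check: confirm the source actually says what the AI claims it says.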


AI echoes its training data, which often favor dominant groups. This can appear as the omission of minoritized scholars, stereotyped language, or skewed sentiment analysis.

Why Does AI Produce Biased Content?

Generative AI systems produce biased content for three main reasons:

  1. Training Data Sources: AI models learn from internet data that reflects existing social and cultural biases from human-created content.
  2. How Generative Models Work: These systems reproduce patterns from their training data, including discriminatory language and perspectives they've observed.
  3. Design Limitations: The technology can't identify or correct biases, so it amplifies existing prejudices by combining biased patterns in new ways.

AI bias results from biased training data, pattern reproduction, and the inability to recognize discriminatory content.

Spot and reduce bias

  1. Prompt for multiple perspectives or under‑represented voices (see the sketch after this list).
  2. Ask the model to list its main sources and examine their diversity.
  3. Compare AI summaries with scholarly or peer‑reviewed reference sources.
  4. Note in your final document how you checked for bias.

Bottom line: inclusive prompts and critical reading can help reduce the impact of algorithmic echo chambers.
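
As a small illustration of step 1, the sketch below builds a reusable bias-aware prompt in Python. The wording is illustrative only, not a vetted rubric; adapt it to your topic and discipline.

```python
# Minimal sketch: a reusable prompt template that asks the model to
# surface multiple perspectives and name its likely sources, making
# omissions easier to spot. The wording is illustrative, not vetted.
def bias_aware_prompt(topic: str) -> str:
    return (
        f"Summarize the main perspectives on {topic}. "
        "Include viewpoints from under-represented or minoritized "
        "scholars, note where the perspectives disagree, and list "
        "the kinds of sources each claim would rest on."
    )

print(bias_aware_prompt("grading practices in higher education"))
```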


Public AI services often store prompts and reserve the right to review them. Some retrain on user data. Uploading sensitive material can violate privacy laws or research ethics.

Why Does AI Pose Privacy Risks?

AI systems create privacy concerns for users in three main ways:

  1. Training Data Collection: AI models are trained on vast quantities of data, much of it copyrighted material, and using copyrighted works to train AI models may constitute prima facie infringement.
  2. User Data Retention: Queries, instructions, and conversations with public chatbots such as ChatGPT may be stored indefinitely unless the user deletes them, including sensitive personal details and proprietary information.
  3. Inadvertent Data Exposure: Chatbots risk capturing and exposing sensitive user data, since personal or confidential information is often disclosed in conversation.

AI privacy risks stem from unauthorized use of copyrighted training data, indefinite storage of user conversations, and potential exposure of sensitive information shared with AI tools.

Safeguards

  • Strip names, IDs, and unpublished data before prompting (a simple redaction sketch follows this list).
  • Avoid uploading copyrighted texts.
  • Use university‑approved tools for institutional data.
  • Consult your supervisor or Technology Services regarding high‑risk cases.
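
For the first safeguard, here is a minimal Python sketch of pre-prompt redaction. The patterns are illustrative assumptions (email addresses, SSN-style numbers, nine-digit IDs) and will miss plenty, including personal names; the sketch supplements, never replaces, the rule against sharing FERPA- or HR-sensitive data.

```python
# Minimal sketch: strip obvious identifiers before sending text to a
# public AI tool. The patterns below are illustrative assumptions and
# WILL miss cases (note the personal name that survives the example).
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # SSN-style numbers
    (re.compile(r"\b\d{9}\b"), "[ID]"),                   # 9-digit IDs (assumed format)
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane Doe at jdoe@example.edu, ID 123456789."))
# -> Contact Jane Doe at [EMAIL], ID [ID].   (the name still gets through)
```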

Key point: treat public AI tools like social media—anything you post could become public.


Professional Development

AI Literacy Course (MIT RAISE – Responsible AI for Social Empowerment and Education)

  • What it covers: AI's role in society, how it impacts communities, bias & fairness.
  • Why it's relevant: Especially suited to a liberal arts context.
  • Cost: Free

Learn Prompting

  • What it covers: Beginner-to-intermediate prompt writing, creative and analytical use.
  • Why it's good: Community-driven and easy to navigate; includes education-specific examples.
  • Cost: Free

Prompt Engineering Guide

  • What it covers: Structured prompt techniques, templates for summarizing, classifying, and ideating.
  • Why it's helpful: Staff can learn practical applications for support, communication, and documentation.
  • Cost: Free

AI + Higher Ed Resources (EDUCAUSE)

  • EDUCAUSE maintains curated articles, webinars, and frameworks for higher ed professionals navigating AI adoption.