The rapid advancements in artificial intelligence (AI) have sparked significant discussion about its role in higher education. Questions around security, privacy, and ethical implications have emerged as key concerns. This guidance aims to help the UNK community navigate both the opportunities and risks associated with AI technologies. As this field continues to evolve, it will be essential to regularly update this guidance to reflect new developments.
General Guidelines
The following recommendations should be considered when using AI:
Human Oversight: Always review the output of AI systems. This "human-in-the-loop" approach helps mitigate risks and ensures better outcomes by leveraging the strengths of both humans and AI.
Bias Awareness: AI systems can reflect biases inherent in their training data. It is crucial to monitor outputs for bias and address any disparities that may arise.
Transparency: Clearly acknowledge when AI-generated content or analysis has been used in your work.
Data Privacy and Protection: Exercise caution when using nonpublic or sensitive data with AI systems, particularly personally identifiable information. Not all AI platforms guarantee adequate privacy protections. UNK has agreements in place for specific platforms that comply with privacy standards; consult the appropriate resources for guidance.
Data and Intellectual Property: Be mindful of the type of information provided to AI systems, as some do not safeguard intellectual property or sensitive data adequately. Follow the Institutional Data Policy for handling such information securely.
University Policies: All university policies regarding workplace behavior apply when using AI in university-related work. Employees are responsible for ensuring AI-generated outputs are used appropriately and align with institutional standards.
AI Use Cases
AI systems offer both opportunities and challenges when applied to professional tasks. It is important to understand the potential risks associated with different use cases and take appropriate measures to mitigate them. Below, use cases are categorized by their risk levels:
Low-Level Risk
AI use cases in this category are generally considered safe, provided they adhere to the general guidelines and comply with university policies. These applications are unlikely to pose significant issues and are suitable for most professional contexts.
Moderate-Level Risk
This category includes scenarios where AI use may introduce challenges or require careful oversight. The acceptability of these cases often depends on the specific circumstances. Key considerations include:
Data Protection: Ensuring nonpublic and sensitive data are safeguarded.
Intellectual Property: Verifying that the AI system respects ownership and rights.
Adhering to general guidelines and university policies is critical in managing these risks effectively.
High-Level Risk
Use cases in this category involve significant legal, compliance, or ethical concerns. These scenarios demand substantial oversight, careful planning, and strict adherence to guidelines. Examples may include:
Using AI with highly sensitive data or personally identifiable information.
Relying on AI systems for decision-making that impacts individuals or communities without human oversight.
Generative AI tools can be highly effective for various professional and academic tasks. However, their use should align with general guidelines, applicable university policies, and data protection standards. Below are examples of common scenarios and their associated risk levels.
Can I use AI to refine something I’ve written?
(Low Risk) – This is generally a safe and effective use of AI tools. Ensure that the revisions maintain the original meaning of your text and follow the general guidelines and applicable policies. When entering text, remove any identifying or sensitive information that is not publicly available.
Can I use AI to help brainstorm a list of ideas?
(Low Risk) – Using AI to brainstorm ideas is a productive and low-risk activity. Follow the general guidelines and policies to ensure proper use.
Can I use AI to summarize a document?
(Low Risk) – Summarizing documents with AI can be helpful for gaining an overview of a topic or text. However, the risk level may vary depending on the privacy requirements of the document. Be cautious, as AI-generated summaries might not be comprehensive or entirely accurate. Users should refer to the primary documents before making critical decisions related to the university.
Can I use AI to write an event announcement and develop event plans?
(Low Risk) – AI can effectively assist in drafting event announcements and plans. However, it is essential to verify that all event details in the AI-generated content are accurate before sharing or implementing.
Can I use AI to analyze data?
(Low Risk), (Medium Risk), (High Risk) – The risk level depends on the classification of the data and the privacy protections offered by the AI system. As a rule, restricted or critical data should not be entered into an AI tool unless you are certain it will not be saved or retained by the system. Even for data with privacy protection, all identifying information should be removed before uploading datasets for analysis. Refer to the Artificial Intelligence Policies and Processes page for specific guidance.
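The step of removing identifying information before uploading a dataset can be sketched in a few lines of Python. This is a minimal illustration only, assuming simple tabular records where direct identifiers appear as named fields; the field names (`name`, `email`, `student_id`, `phone`) are hypothetical examples, not a complete list, and real datasets may also contain indirect identifiers that require more careful treatment.

```python
# Hypothetical list of direct-identifier field names; adapt to your dataset.
DIRECT_IDENTIFIERS = {"name", "email", "student_id", "phone"}

def deidentify(records):
    """Return copies of the records with identifier fields removed."""
    return [
        {k: v for k, v in row.items() if k.lower() not in DIRECT_IDENTIFIERS}
        for row in records
    ]

rows = [
    {"name": "A. Student", "email": "a@example.edu", "major": "Biology", "gpa": 3.4},
    {"name": "B. Student", "email": "b@example.edu", "major": "History", "gpa": 3.7},
]
cleaned = deidentify(rows)
# cleaned retains only non-identifying fields such as major and gpa
```

Note that stripping direct identifiers is a minimum step, not full anonymization; combinations of remaining fields can still identify individuals, so the data-classification rules above still apply.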
Can I use AI to analyze or combine survey responses?
(Low Risk), (Medium Risk), (High Risk) – Similar to data analysis, the risk level depends on the sensitivity of the survey responses and the system's data privacy safeguards. Restricted or critical survey data should not be used with AI tools unless you are confident the data will not be saved by the platform. Always remove identifying information from datasets before analysis.
Generative AI tools can assist with a variety of tasks in teaching and learning. However, their effectiveness and appropriateness depend on the context and how they are used. Below are common use cases, categorized by their associated risk levels.
Can I use AI to generate practice quizzes?
(Medium Risk) – AI can efficiently create quiz questions, including distractors and feedback. However, each question must be thoroughly reviewed for accuracy before use.
Can I use AI to edit or generate feedback on my own writing?
(Low Risk) – Generative AI platforms are effective at providing quick feedback on personal writing, provided there are no external restrictions (e.g., assignment rules prohibiting AI use). The more specific your instructions, the better the feedback. For prompt ideas, see Rob Lennon’s post.
Can I use AI to generate feedback for a student on an assignment?
(Medium Risk) – AI-generated feedback can be helpful but is not a substitute for teacher feedback. Do not upload student content to AI platforms without the student's consent, and always review AI-generated feedback for accuracy before sharing it with students. Consider encouraging students to use AI themselves to generate feedback, as outlined in Ethan and Lilach Mollick’s work, New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments.
Can I use AI to translate class materials into another language?
(Low Risk) – AI tools can effectively translate class materials. It’s recommended to periodically check the translations with fluent speakers to ensure accuracy. Microsoft Translator is one option for this task.
Can I use AI to help advise a student on their future plans?
(Low Risk), (High Risk) – Using AI as a brainstorming tool for career ideas or internships is low-risk if no personal or identifiable student data is shared. However, sharing a student’s degree audit or other sensitive data with AI tools is high-risk and should be avoided.
Can I use AI to create vocabulary lists?
(Low Risk) – AI can generate personalized vocabulary lists tailored to different reading levels and interests. Visit AI for Education for an example prompt to accomplish this.
Can I use AI to create practice scenarios?
(Low Risk) – AI can help create scenarios for practicing writing, reading comprehension, critical thinking, or problem-solving skills. As with any AI-generated content, it’s important to review for accuracy, appropriateness, and alignment with your learning objectives.
Can I use AI to clarify concepts or simplify topics?
(Low Risk) – Generative AI excels at summarizing and simplifying complex topics. However, it’s important to review the output for accuracy and ensure it aligns with the intended meaning, especially when using it to support learning or share information with others.
Can I use AI to create simulations for historical events or settings?
(Medium Risk) – AI can generate engaging simulations, but these may present limited perspectives on the past and can contain inaccuracies. Unverified information, anachronisms, and biases necessitate careful review, drawing on subject matter expertise and knowledge of the curriculum and class context. These tools can be valuable teaching aids when paired with clear fact-checking and context.
Can an AI act as an on-demand tutor?
(Medium Risk) – AI can provide personalized instruction on many topics when it is deliberately designed and guided by structured prompts. With the right prompt, any student with internet access can construct their own knowledge. A carefully crafted tutor prompt ensures that the AI knows its role, meets students where they are, provides explanations and examples, and guides in open-ended ways. However, users must critically analyze and cross-verify AI outputs against disciplinary expertise. See an example tutor chatbot.
Can I use AI for content creation (e.g., draft presentations and teaching materials)?
(Medium Risk) – AI is a helpful tool for generating content but requires thorough review for accuracy, bias, and alignment with course objectives. Materials should not be used without human oversight.
Generative AI tools can assist with a range of administrative and professional activities. However, their appropriateness depends on the context, level of risk, and adherence to university policies. Below are examples of common use cases, categorized by their associated risk levels.
Can I use AI to evaluate competitive bid responses?
(High Risk) – This is not an appropriate use of AI. AI tools can introduce bias and inaccuracies, which are unacceptable in evaluating vendor proposals during the competitive bid process. Additionally, proprietary or protected information in RFP responses should never be entered into AI systems.
Can I use AI for market research?
(Medium Risk) – While AI can assist in market research, this is not its strength. AI-generated information must be rigorously fact-checked. It can serve as a complementary resource but should always be verified through human analysis.
Can I use AI to write a performance appraisal or aid in performance management?
(Medium Risk) – AI can assist in summarizing data or drafting language for performance appraisals, but all content must originate from the supervisor responsible for the review. Supervisors may use AI to refine constructive feedback or create development plans, but confidential, organizational, or personally identifiable information should not be input into AI tools. All outputs must be thoroughly reviewed to avoid bias or inaccuracies.
Can I use AI to review job applicants' resumes?
(High Risk) – This is not an appropriate use of AI due to the potential for bias in AI systems. Resume evaluation should be conducted by a recruiter or hiring team. AI tools may inadvertently discriminate based on protected characteristics, violating Equal Employment Opportunity standards.
Can I use AI to practice my interviewing skills?
(Low Risk) – Using AI to simulate interview scenarios or practice responses can be a beneficial and low-risk application. Resources are available to guide effective use of AI for interview preparation.
Can I use AI to write a position description or advertisement?
(Medium Risk) – AI can help articulate concise and inclusive language for job descriptions. However, the final draft should comply with university-specific requirements and policies, and all AI-generated content should be thoroughly reviewed.
Can I use AI to write a letter of reference?
(Medium Risk) – AI can assist with drafting or refining reference letters, but careful review is necessary to ensure accuracy, avoid bias, and tailor the letter to the individual. AI is best used to improve clarity or make an existing draft more concise.
Can I use AI to write a cover letter?
(Medium Risk) – AI can help generate or edit cover letters to save time. However, it is critical to fact-check the content and ensure the language is free of bias or inaccuracies. AI can assist in refining language for clarity and conciseness.
Can I use AI to evaluate applications for admissions?
(High Risk) – This is not an appropriate use of AI due to the potential for bias. Admissions decisions should be made by human evaluators to ensure fairness and accuracy.
Can I use AI to generate policies or office procedures?
(Medium Risk) – AI can serve as a starting point for drafting policies and procedures. However, all outputs should be reviewed and finalized by a subject matter expert to ensure accuracy and compliance.
Generative AI tools can assist with various research-related activities. However, their appropriateness depends on the context, the nature of the task, and adherence to privacy and compliance standards. Below are common use cases and their associated risk levels.
Can I use AI to review grant proposals?
(High Risk) – This is not an appropriate use of AI. Some funding agencies, such as the NIH, have explicitly prohibited the use of AI for this purpose (see NIH Notice). AI systems may not provide the confidentiality or accuracy required for grant proposal review.
Can I use AI to perform a literature review?
(Medium Risk) – AI tools can assist in summarizing or organizing information for a literature review (see the AI tools web page for examples). However, all AI-generated results must be verified against primary sources to ensure accuracy and completeness.
Can I use AI to analyze research data?
(Low Risk), (Medium Risk), (High Risk) – The risk level depends on the classification of the data and the privacy safeguards provided by the AI system. Sensitive or classified data should only be used if the system ensures adequate protection and complies with institutional policies. Refer to UNK’s data privacy policies for more information.
Can I use AI to generate research ideas?
(Low Risk), (Medium Risk) – AI can effectively brainstorm research topics or explore innovative ideas. However, care should be taken to ensure the information used in prompts aligns with the privacy and data requirements of the AI system.
For a deeper understanding of AI usage on campus, please review the following resources: