AI Guidelines and Use Cases

The rapid advancement of artificial intelligence (AI) has sparked significant discussion about its role in higher education, with security, privacy, and ethical implications emerging as key concerns. This guidance aims to help the UNK community navigate both the opportunities and risks associated with AI technologies. As the field continues to evolve, it will be essential to update this guidance regularly to reflect new developments.

General Guidelines

The following recommendations should be considered when using AI:

  1. Human Oversight
    Always review the output of AI systems. This "human-in-the-loop" approach helps mitigate risks and ensures better outcomes by leveraging the strengths of both humans and AI.
  2. Bias Awareness
    AI systems can reflect biases inherent in their training data. It is crucial to monitor outputs for bias and address any disparities that may arise.
  3. Transparency
    Clearly acknowledge when AI-generated content or analysis has been used in your work.
  4. Data Privacy and Protection
    Exercise caution when using nonpublic or sensitive data with AI systems, particularly personally identifiable information. Not all AI platforms guarantee adequate privacy protections. UNK has agreements in place for specific platforms that comply with privacy standards; consult the appropriate resources for guidance.
  5. Data and Intellectual Property
    Be mindful of the type of information provided to AI systems, as some do not safeguard intellectual property or sensitive data adequately. Follow the Institutional Data Policy for handling such information securely.
  6. University Policies
    All university policies regarding workplace behavior apply when using AI in university-related work. Employees are responsible for ensuring AI-generated outputs are used appropriately and align with institutional standards.

AI Use Cases

AI systems offer both opportunities and challenges when applied to professional tasks. It is important to understand the potential risks associated with different use cases and take appropriate measures to mitigate them. Below, use cases are categorized by their risk levels:

Low-Level Risk

AI use cases in this category are generally considered safe, provided they adhere to the general guidelines and comply with university policies. These applications are unlikely to pose significant issues and are suitable for most professional contexts.

Moderate-Level Risk

This category includes scenarios where AI use may introduce challenges or require careful oversight. The acceptability of these cases often depends on the specific circumstances. Key considerations include:

  • Data Protection: Ensuring nonpublic and sensitive data are safeguarded.
  • Intellectual Property: Verifying that the AI system respects ownership and rights.
  • Policy Compliance: Adhering to the general guidelines and university policies to manage these risks effectively.

High-Level Risk

Use cases in this category involve significant legal, compliance, or ethical concerns. These scenarios demand substantial oversight, careful planning, and strict adherence to guidelines. Examples may include:

  • Using AI with highly sensitive data or personally identifiable information.
  • Relying on AI systems, without human oversight, to make decisions that affect individuals or communities.