Generative AI

Interim Guidance on Data Uses and Risks of Generative AI 

Introduction

Generative artificial intelligence (AI) language models, including products like ChatGPT, Microsoft Copilot, and Bard, are powerful tools that can assist with tasks ranging from teaching, learning, and research to writing support and data analysis. Michigan State University has not endorsed any generative AI product as compliant with sensitive data requirements or regulations. Any use of generative AI tools, including those provided by vendors through MSU accounts or those found on the approved software list, must adhere to this interim guidance, to any more specific guidance particular to your field, and to all relevant laws and regulations.

Users who choose to use any generative AI tools should understand the potential risks and limitations associated with them. This interim guidance outlines recommendations regarding the types of data that may and may not be entered into consumer or commercial generative AI products, with specific considerations for higher education, MSU policies, and institutional needs. It also provides an overview of limitations to be aware of when using generative AI and offers some current best practices for working with these tools.

Further guidance on more specific needs, such as handling generative AI in teaching, learning, and research activities, selecting and adopting AI tools, and creating sample syllabus language, will follow in the coming months as MSU continues to explore how to most effectively leverage these new tools in a way that meets the university's needs while keeping our data and users safe.

Expectations regarding data stewardship

All data use must comply with all state and federal laws and institutional regulations and requirements, including MSU's acceptable use and institutional data policies. Uses must also be weighed against ethical considerations in alignment with the university's mission, vision, and values. Although generative AI products may claim to have some privacy safeguards in place, users should assume that all consumer generative AI products make data publicly available unless otherwise indicated per an explicit official agreement with Michigan State University.

In addition to the expectations above, specific types of data should be handled in different ways when using a generative AI product:  

  1. Public data: Generative AI can safely process publicly available information, general academic concepts, and non-sensitive data. Use of public data must still comply with MSU’s policies and be considered relative to its ethical and reputational implications. 
  2. Confidential or private data: Do not enter confidential data, including, but not limited to, social security numbers, contact details, name/image/likeness, and any information covered by FERPA, HIPAA, or other regulations into any generative AI product without documented approval from MSU IT Governance, Risk, and Compliance (GRC), as well as express written consent from any other necessary parties. 
  3. Research data: Researchers must consider the nature and sensitivity of scholarly data before using generative AI to support research. Do not put data that are confidential, contain sensitive information, or are subject to specific legal or ethical requirements (e.g., human subjects data) into any generative AI without proper anonymization and evaluation of potential risks, documented approval from MSU IT GRC, and express written consent from any other necessary parties. 
  4. Intellectual property: As questions around intellectual property and the use of generative AI are unresolved, the MSU community must avoid inputting proprietary or confidential information into generative AI, including unpublished research findings, internal university data or documents, or any information protected by intellectual property rights without express written consent from all stakeholders. 
  5. ITAR, EAR, and CUI Compliance: Strictly avoid using generative AI for data that falls under International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), and Controlled Unclassified Information (CUI). Researchers must ensure they do not intentionally or inadvertently use generative AI to process, store, or transmit data that could be classified under these regulations. The mishandling of such data through generative AI platforms could lead to serious legal and security implications.

Additional risks and limitations of generative AI

Beyond considering requirements around different types of data inputs, users of generative AI should be aware of other risks and limitations related to the output generated by these products.  

  1. Misinformation and inaccuracies: Generative AI may generate responses that are not always accurate or up to date. Users should independently verify the information provided by generative AI, especially when it comes to specific facts or rapidly evolving subjects.  
  2. Bias and unintentional harm: Generative AI can inadvertently reflect biases present in the training data. It is crucial to critically evaluate and contextualize the responses generated by generative AI to ensure fair and unbiased information dissemination.  
  3. Inappropriate content: Although generative AI providers may have made efforts to filter out inappropriate content, generative AI may produce or respond to content that is offensive, inappropriate, or violates ethical standards.  
  4. Algorithmic implications: AI can deduce and infer algorithmic criteria other than original intent. This situation can lead to or exacerbate potential bias through the inclusion or reweighting of unintentional variables.  

Current recommended practices

For MSU users who elect to use generative AI tools that are not governed by a formal agreement with the university, we recommend the following practices.  

  1. Consider carefully: Thoroughly anticipate the impact of using generative AI before entering information. What types of data would you be entering, and how does this interim guidance factor into that use? Remember that once data is entered into a generative AI tool, neither you nor the institution can directly remove it. Furthermore, be cautious of claims by product providers: there is no consistent definition of or criteria around vendor statements that a product uses "AI." These "AI"-enhanced products may not perform better than, or even as well as, non-AI alternatives, and may not meet the same standards for output. 
  2. Think critically: Evaluate and corroborate information obtained from generative AI by consulting additional sources or seeking expert advice such as that provided by MSU Libraries.  
  3. Understand expectations: When using generative AI for writing support, review the disclosure requirements of the publisher, reviewer, or recipient of the finished product. Some journals, offices, publishers, or educators may require that generative AI be cited in work, potentially in different ways. 
  4. Exercise caution: Software feature release models mean that new generative AI products or features may be released, either to the public or even within the MSU environment, without undergoing a formal review. In such cases, assume that the most restrictive reading of this data guidance applies to the tool unless expressly communicated otherwise by Michigan State University. 
  5. Stay informed: Follow conversations around AI technology and further updates and guidance from the university to incorporate the latest improvements and address any emerging risks as generative AI continues to develop.  

Microsoft Bing Chat Enterprise / MSU O365 Copilot Use 

Microsoft Bing Chat Enterprise (also referred to as Microsoft Copilot) is available to all MSU users through the university's O365 license. Users who wish to use Microsoft Bing Chat Enterprise / Copilot should be aware of the following: 

  1. Because Microsoft released this tool as part of existing licensing, it has not yet undergone the formal review that is part of our standard procurement process. As with any such tool, caution is advised. 
  2. Pending further evaluation, Microsoft Bing Chat Enterprise should not be considered safe for use with any of the data types restricted above. In particular, MSU IT does not currently endorse Microsoft Bing Chat Enterprise as compliant with any of these regulations: 
    1. HIPAA 
    2. FERPA 
    3. NIST 800-171

By adhering to these guidelines, our institution can begin to leverage the benefits of generative AI while mitigating potential risks and ensuring its responsible use in the context of higher education. 

_________________________________________________

Other MSU Guidance

Students

Educators

Marketing and Communication

Research


Updated 2/19/24