Wesleyan University Generative AI Usage Policy 

Purpose 

Wesleyan University aims to balance the innovative potential of generative artificial intelligence (AI) with the need to protect sensitive information, uphold academic integrity, and comply with existing laws and university policies. As the use and applications of generative AI continue to evolve rapidly, this policy provides guidance for the responsible use of AI tools within the Wesleyan University community. This policy is not exhaustive; it represents ongoing work toward a thorough and comprehensive policy. 

Scope 

This policy applies to all faculty, staff, students, and affiliates of Wesleyan University who collect, use, or share university information in AI tools and platforms. This includes information as defined in the university’s Data Security and Privacy Protection Policy (DSPPP) as “Public,” “Confidential,” and “Restricted,” as well as information generated during research. 

Policy 

Compliance with other university policies 

This policy does not supersede any other university policies.  Users must comply with all relevant Wesleyan University policies, as well as applicable state and federal laws, when using generative AI tools. This includes, but is not limited to, Wesleyan University policies related to academic integrity and the honor code, copyright and intellectual property, and data governance and security.  

Types of data that can be used in generative AI software 

The university’s Data Security and Privacy Protection Policy (DSPPP) provides guidance on determining whether information (whether institutional or research-related) is “Public”, “Confidential”, or “Restricted”. 

Only information classified as “Public” may be entered into publicly available generative AI tools (see Appendix). If a tool has been reviewed and approved by Wesleyan ITS, users may enter information classified as “Confidential” into that tool. Information classified as “Restricted” may not be entered into any generative AI tool without explicit written approval from the Chief Information Security Officer and the relevant Cabinet member. 

Researchers, students, and affiliates may collect, use, and share information as part of their research with these tools. Before uploading research information, it is the user’s responsibility to review the terms and conditions of the software’s user agreement to ensure that their use of AI complies with all relevant data protection and privacy laws, especially when handling sensitive or confidential research data, and to confirm whether the AI tool is permitted to ingest supplied data to train or expand its model. 

Adding generative AI to existing university systems 

Generative AI tools or features that integrate with any existing Wesleyan system, platform, or tool require review and approval by Wesleyan ITS. Prior to enabling such integrations, users must consult with the university's ITS department to ensure the tools undergo a thorough risk assessment. 

Potential for incorrect data from generative AI tools 

AI-generated content may be misleading, fabricated, or false (commonly referred to as hallucinations). Users are responsible for verifying the accuracy of content generated by AI tools and ensuring that it does not infringe on copyright or contain misleading information. 

Procurement 

Prior to procuring generative AI tools, users must consult with the university's ITS department to ensure these tools undergo a thorough risk assessment. If an institutional license is desired, users should follow the procurement procedures as outlined in the Vendor Management Policy. Requests for non-institutional licenses should be submitted by filling out a Request ITS Support ticket in the ITS Help bucket in WesPortal. 

 

Appendix 

As defined by the Data Security and Privacy Protection Policy (DSPPP), Public Data can be used in any publicly available generative AI tool. This includes but is not limited to: 

As defined by the DSPPP, Confidential Data can only be used in the following generative AI tool: 

As defined by the DSPPP, Restricted Data cannot be used by any generative AI tools without prior approval. 

Research information that the user has a contractual, legal, or regulatory obligation to protect should not be used in any generative AI tool. Examples include, but are not limited to, information covered by the Family Educational Rights and Privacy Act (FERPA), Social Security numbers (SSNs), and Protected Health Information (PHI). 

 

These links were consulted during the drafting of this policy: 

https://huit.harvard.edu/ai/guidelines 

https://www.boisestate.edu/policy/generative-artificial-intelligence-ai-use-and-policies/ 

https://its.uchicago.edu/generative-ai-guidance/ 

https://it.wisc.edu/generative-ai-uw-madison-use-policies/ 

 
Revised 08/06/2024