Ethical Concerns Regarding Generative AI

Whether or not you choose to incorporate generative AI (GenAI) into coursework, it is important for both instructors and students to understand how these systems work and the ethical concerns they raise. Facilitating open, structured discussions about GenAI can help students develop the critical judgment needed to navigate tools they are likely to encounter in academic, professional, and everyday contexts.

These discussions might address issues such as the spread of disinformation, the lack of regulation governing AI companies, environmental and labor impacts, and questions of authorship and accountability. As members of the UCLA community, students and instructors should be equipped to recognize both the limitations and potential uses of GenAI tools, as well as the responsibilities that accompany their use.

Below are key factors to consider when evaluating the ethical use of generative AI:

Data Use and Consent: GenAI models are trained on vast datasets and often collect and store user data. In some cases, training data may have been used without the consent of the original creators. Numerous lawsuits have already emerged around copyright infringement and data use ("The Times Sues OpenAI and Microsoft Over Use of Copyrighted Work" and "Boom in A.I. Prompts a Test of Copyright Law").

Instructors and students should keep the following principles in mind:

  • Whenever possible, use commercially licensed or institutionally approved GenAI tools to reduce risks related to intellectual property rights infringement.

  • Consider carefully what information students are being asked to share when using GenAI tools in a classroom context.

  • Obtain explicit and informed consent before collecting, processing, or using personal data in GenAI systems.

  • Use anonymized datasets to minimize privacy risks, especially when working with personal or sensitive information. 

  • Never upload or share any student information covered under FERPA or other protections.

Copyright & Authorship: Generative AI complicates traditional understandings of authorship and ownership. Current U.S. copyright law does not grant copyright protection to content generated solely by AI, and the legal landscape continues to evolve. Students and instructors are encouraged to consult up-to-date guidance as the law develops.

Cost & Limits to Access: While many GenAI tools offer free versions, advanced features often require paid subscriptions. These costs can create inequities in access, particularly in educational settings. Instructors should be mindful of whether assignments assume or require access to paid tools and consider alternatives to avoid disadvantaging students.

Bias and Representation: The datasets used to train GenAI tools may reflect historical, cultural, or structural biases, which can be reproduced or amplified in AI-generated outputs. As a result, these tools may generate content that is biased, exclusionary, or misleading (See: "ChatGPT is as Biased as We Are").

Students should be encouraged to ask: Whose perspectives are represented—or missing—in this output? How might bias in the training data shape the information presented? How does this content influence my understanding of the topic? Critical evaluation is essential when using GenAI outputs as part of any learning or decision-making process.

False Information & Hallucinations: GenAI tools should never be treated as primary or authoritative sources of information. Content generated by GenAI tools can include “hallucinations,” contain inaccuracies, or be out of date ("Chatbots May ‘Hallucinate’ More Often Than We Realize"). Additionally, the underlying models powering GenAI tools may have been trained on biased or incomplete data ("Disinformation Researchers Raise Alarms About A.I. Chatbots"). Best practices include:

  • Verifying AI-generated information using reliable, independent sources.

  • Treating AI outputs as drafts or suggestions rather than factual claims.

  • Citing generative AI tools whenever their content is quoted, paraphrased, or incorporated into academic or creative work.

Energy & Environmental Impacts: The development, training, and use of GenAI systems require a significant amount of energy, consume large amounts of water for cooling, and contribute to carbon emissions. While some companies are working to reduce these impacts, AI use carries real environmental costs ("The Uneven Distribution of AI’s Environmental Impacts" and "Hungry for Energy, Amazon, Google and Microsoft Turn to Nuclear Energy"). Students and instructors should consider whether the use of GenAI is justified for a given task and whether more efficient or lower-impact alternatives are available. (See our guide “AI and the Environment: Considerations and Concerns” for more information.)

Labor Exploitation & Labor Harm: Generative AI systems rely heavily on human labor—both in the creation of training data and in the ongoing evaluation and moderation of outputs. Much of this labor is performed by contract or precariously employed workers who are often underpaid and exposed to disturbing content (“America Already Has an AI Underclass”, “Cleaning Up ChatGPT Takes Heavy Toll on Human Workers”, “AI needs to face up to its invisible-worker problem”). Ethical engagement with GenAI includes recognizing this hidden labor and considering how widespread use of these tools affects workers globally.

Adapted from materials from UCLA Teaching and Learning Center, Widener University Library, and Amherst College Library.