According to Yale Environment 360, "Estimates of the number of cloud data centers worldwide range from around 9,000 to nearly 11,000. More are under construction. The International Energy Agency (IEA) projects that data centers’ electricity consumption in 2026 will be double that of 2022 — 1,000 terawatt-hours, roughly equivalent to Japan’s current total consumption." Columbia University estimates that "by 2027 GPUs will constitute about 1.7 percent of the total electric capacity or 4 percent of the total projected electricity sales in the United States." While MIT writes that it and other institutions are attempting to lessen the environmental impact of AI, it is important to understand the ethical implications of AI's electricity usage.
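As a rough sanity check, the sketch below restates the arithmetic implied by these quoted figures. The 1,000 TWh value and the doubling come from the IEA projection quoted above; Japan's annual total is assumed to be roughly the same 1,000 TWh, as the quotation implies.

```python
# Restate the arithmetic behind the quoted IEA projection. The 1,000 TWh
# figure and the doubling come from the quotation above; Japan's annual
# consumption is assumed to be roughly the same 1,000 TWh, as the quote implies.
projected_2026_twh = 1_000
implied_2022_twh = projected_2026_twh / 2  # "double that of 2022"
japan_annual_twh = 1_000                   # "roughly equivalent to Japan"

print(f"Implied 2022 data-center consumption: ~{implied_2022_twh:.0f} TWh")
print(f"2026 projection vs. Japan's annual usage: "
      f"{projected_2026_twh / japan_annual_twh:.0%}")
```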
According to the University of Chicago, AI "ties into the broader issue of contract cheating – hiring a third party to do work, such as writing an essay or taking an exam, on a student’s behalf. Contract cheating is already a severe problem worldwide, and with the widespread availability of AI writing tools, students can now generate 'original' written work for free, without the need to involve a human agent who might betray the student’s confidence."
While some plagiarism checkers claim that they can detect AI, no program can be sure. Andrew Myers, writing for Stanford University's Institute for Human-Centered Artificial Intelligence, reports that a Stanford study found these programs lacking: "While the detectors were 'near-perfect' in evaluating essays written by U.S.-born eighth-graders, they classified more than half of TOEFL essays (61.22%) written by non-native English students as AI-generated (TOEFL is an acronym for the Test of English as a Foreign Language). It gets worse. According to the study, all seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (19%) as AI-generated and a remarkable 89 of the 91 TOEFL essays (97%) were flagged by at least one of the detectors." Nate Pindell, a Senior Instructional Designer at the University of Nebraska-Lincoln, writes that "Students that are neurodivergent (autism, ADHD, dyslexia, etc.) are also prone to receive false positive ratings. There is not a 'one reason fits all' diagnosis, but it is often related to the reliance on repeated phrases, terms, and words. This is a sort of 'compositional masking' where neurodivergent individuals learn pattern recognition rather than prose. Even the voice and warmth of a message can be cause for concern, both for AI checkers as well as human readers, such as when Purdue professor Rua Mae Williams was accused of being an AI bot. The lack of using pronouns (I, me, we) can, depending on the AI checker and what language models it was trained on, be misconstrued as AI writing."
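The Stanford study's percentages can be recomputed directly from the counts in the quotation; the sketch below does so (the exact ratios are roughly 19.8% and 97.8%, which the study reports as 19% and 97%).

```python
# Recompute the false-positive rates reported in the Stanford study quoted
# above; all counts come directly from the quotation.
toefl_essays = 91
flagged_by_all_seven = 18      # essays every one of the 7 detectors flagged
flagged_by_at_least_one = 89   # essays flagged by at least one detector

print(f"Unanimously flagged: {flagged_by_all_seven / toefl_essays:.1%}")    # ~19.8%
print(f"Flagged by any:      {flagged_by_at_least_one / toefl_essays:.1%}") # ~97.8%
```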
Note: for the record, this entire guide was created and written by a person.
UC San Diego has an excellent page on Copyright and AI; the US Copyright Office is another strong resource on the topic. One serious issue is that AI trains on data that is often copyrighted. Daryl Lim, the H. Laddie Montague Jr. Chair in Law at Penn State Dickinson Law, wrote in a post for Georgetown University: "By design, generative AI learns from extensive datasets, often encompassing copyrighted materials, to produce new, derivative works. This ability to reproduce distinct, copyrighted content underscores generative AI’s potential for copyright infringement. The fair use doctrine, which allows limited use of copyrighted material without permission for purposes such as criticism, comment, news reporting, teaching, scholarship, or research, is a critical defense used by AI developers. However, the scale and nature of AI’s use of copyrighted materials challenge traditional interpretations of fair use, prompting a need for clearer guidelines that reflect the unique characteristics of AI-generated content and its reliance on pre-existing works."
According to the Congressional Research Service, works created by AI cannot be copyrighted because AI is not a human author.
For videos on Copyright and Artificial Intelligence, see the Copyright Clearance Center.
Researchers have biases, and AI can amplify them. Faye-Marie Vassel, a postdoctoral fellow at Stanford University who studies bias and AI, writes:
"We weren’t surprised by the presence of bias in the outputs, but we were shocked at the magnitude of it. In the stories the LLMs created, the character in need of support was overwhelmingly depicted as someone with a name that signals a historically marginalized identity, as well as a gender marginalized identity. We prompted the models to tell stories with one student as the 'star' and one as 'struggling,' and overwhelmingly, by a thousand-fold magnitude in some contexts, the struggling learner was a racialized-gender character."
When discussing AI in healthcare, Dr. Ted A. James, Medical Director and Vice Chair at Beth Israel Deaconess Medical Center, Harvard Medical School, writes: "[A]n AI used across several U.S. health systems exhibited bias by prioritizing healthier white patients over sicker black patients for additional care management because it was trained on cost data, not care needs. AI models that predict patient outcomes may inherit biases if the data used reflects historical inequalities in treatment or access to care. Algorithms may predict lower health risks for populations that have historically had less access to health care services, not because they are healthier, but because there is less documented health care usage. This demonstrates how AI can entrench existing disparities if not carefully managed."
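To make the cost-proxy mechanism concrete, here is a minimal, entirely hypothetical sketch. None of these numbers come from the health systems Dr. James describes; they are invented solely to show how ranking patients by historical cost rather than care need misses the sicker group.

```python
# Toy illustration of the proxy-label problem described above: selecting
# patients for extra care management by historical cost instead of care need.
# Every group label and number here is hypothetical.
patients = [
    # (group, true_care_need, historical_cost): cost understates need for
    # group B, standing in for historically reduced access to care
    ("A", 8, 8000),
    ("A", 5, 5000),
    ("B", 8, 4000),   # same need as the first group-A patient, half the spend
    ("B", 5, 2500),
]

top_by_cost = sorted(patients, key=lambda p: p[2], reverse=True)[:2]
top_by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:2]

print("Top 2 by cost proxy:", [(g, n) for g, n, _ in top_by_cost])
print("Top 2 by true need: ", [(g, n) for g, n, _ in top_by_need])
# The cost proxy selects both group-A patients and misses the equally sick
# group-B patient, mirroring the disparity described above.
```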
Dr. Iman Dehzangi, assistant professor at Rutgers University, writes: "Biased AI can give consistently different outputs for certain groups compared to others. Biased outputs can discriminate based on race, gender, biological sex, nationality, social class, or many other factors. Human beings choose the data that algorithms use, and even if these humans make conscious efforts to eschew bias, it can still be baked into the data they select. Extensive testing and diverse teams can act as effective safeguards, but even with these measures in place, bias can still enter machine-learning processes. AI systems then automate and perpetuate biased models."
There are three issues that fall under the category of human exploitation: the worry that AI will take jobs, the exploitation already happening in the creation of AI, and misinformation created to manipulate people with AI.
A video titled "Training AI takes heavy toll on Kenyans working for $2 an hour" from 60 Minutes describes "AI sweatshops" where people do not have long-term contracts. Leon Furze, a consultant, author, and PhD candidate at Deakin University in Australia, discusses this issue more in-depth.
Wichita State University writes that AI will render some jobs obsolete and make others less taxing, able to be done by fewer people. Inside Higher Ed contributor Ray Schroeder writes that colleges and universities will need to change the way they work, and former MIT president L. Rafael Reif wrote that we need to prepare students for a world in which they will work with AI.
Deepfakes are videos that appear to show a real person but are either completely fabricated or heavily manipulated with AI. Deepfakes are a threat both to political actors (who may be targeted and made to appear to say or do something antithetical to their beliefs) and to other people who are added to deepfake videos without their consent. The Department of Homeland Security (DHS) and the US Government Accountability Office (GAO) both publish information on the risks of deepfakes.