In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, a la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teacher's desk. Or you had the classic excuses to demur: my dog ate my homework, and the like.
The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver or Course Hero offered solutions to common math textbook problems.
What all these strategies had in common was effort: there was a cost to not doing your work. Sometimes it was more work to cheat than it would have been just to do the work yourself.
Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer.
Experts, parents and educators have spent the past three years worrying that AI has made cheating too easy. A massive Brookings report released Wednesday suggests they weren't worried enough: the deeper problem, the report argues, is that AI is so good at cheating that it's causing a "great unwiring" of students' brains.
The report concludes that the qualitative nature of AI risks, among them cognitive atrophy, "artificial intimacy" and the erosion of relational trust, currently overshadows the technology's potential benefits.
“Students can’t reason. They can’t think. They can’t solve problems,” lamented one teacher interviewed for the study.
The findings come from a yearlong "premortem" conducted by the Brookings Institution's Center for Universal Education, a rare format for Brookings to use, but one the center said it preferred to waiting a decade to discuss the failures and successes of AI in school. Drawing on hundreds of interviews, focus groups and expert consultations, and a review of more than 400 studies, the report represents one of the most thorough assessments to date of how generative AI is reshaping students' learning.
“Fast food of education”
The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," warns that the "frictionless" nature of generative AI is its most pernicious feature for students. In a conventional classroom, the struggle to synthesize multiple papers into an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the "fast food of education," one expert said: it provides answers that are convenient and satisfying in the moment but cognitively hollow over the long term.
Concerns are growing regarding the integration of artificial intelligence into education, prompting discussions about ethical frameworks, student well-being, and the appropriate role of AI as a tool to enhance, rather than replace, human learning and judgment. Efforts are underway to establish guidelines for responsible AI implementation in classrooms, focusing on literacy, privacy, and preventing manipulative practices.
U.S. Department of Education and AI Integration
The U.S. Department of Education is actively addressing the integration of AI in education, emphasizing the need for responsible implementation and equitable access. The department recognizes AI’s potential to personalize learning and improve educational outcomes, but also acknowledges the risks associated with bias, privacy, and accessibility.
Detail: In November 2023, the Department of Education announced new resources and funding opportunities to support the responsible use of AI in schools. These initiatives aim to help educators understand and leverage AI tools effectively, while also protecting student data and promoting digital equity. The department’s focus extends to ensuring AI tools are accessible to students with disabilities and those from underserved communities.
Example: The Department of Education hosted a National Dialogue on Artificial Intelligence and Education, bringing together educators, researchers, and policymakers to discuss the challenges and opportunities presented by AI in education.
AI Bill of Rights and Student Privacy
The White House’s Blueprint for an AI Bill of Rights outlines principles for protecting individuals from harmful AI practices, including those impacting students. These principles emphasize the importance of safe and effective AI systems, algorithmic transparency, and data privacy.
Detail: The Blueprint for an AI Bill of Rights, released in October 2022, addresses five key principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. These principles are intended to guide the design and deployment of AI systems across various sectors, including education.
Example: The principle of data privacy, as outlined in the AI Bill of Rights, directly relates to student data protection. Schools utilizing AI tools must ensure compliance with the Family Educational Rights and Privacy Act (FERPA) and other relevant privacy regulations. FERPA grants parents certain rights with respect to their children's education records, including the right to review and correct the records and the right to have some control over the disclosure of personally identifiable information.
Federal Trade Commission (FTC) Oversight and “Manipulative Engagement”
The Federal Trade Commission (FTC) is actively investigating AI companies, including OpenAI, to assess their data privacy and security practices, and to prevent deceptive or unfair practices, including “manipulative engagement.”
Detail: In October 2023, the FTC launched an investigation into OpenAI, focusing on whether the company made false or misleading claims about its AI models and their capabilities. The investigation also examines OpenAI's data security practices and whether they adequately protect user information. The FTC's concerns extend to the potential for AI systems to be used to manipulate or exploit consumers, including students.
Example: The FTC’s enforcement actions against other tech companies demonstrate its commitment to protecting consumers from harmful AI practices. As a notable example, the FTC has taken action against companies using deceptive dark patterns to manipulate users into sharing their data. This precedent suggests the FTC will scrutinize AI companies’ practices to ensure they are not exploiting vulnerabilities in user behavior.
Common Sense Media and Holistic AI Literacy
Organizations like Common Sense Media are advocating for “holistic AI
