
ChatGPT, Google Bard, and Jasper are just a few of the AI chatbots that have reshaped the technology landscape within the past year. Since January 2023, when ChatGPT surpassed 100 million users, it has continued to grow faster than legislatures and industries could adapt. With the Writers Guild striking to protect their jobs from AI, demanding stronger protections against technology that creates from a foundation learned from human work, we’re in uncharted territory. This technology has sparked increased conversation about artificial superintelligence (ASI), a point at which AI could surpass even the best and brightest humans and solve problems beyond human comprehension. We’re just now considering what AI means for creative professions, with the WGA strike calling for protection against AI using writers’ work to create, but what does this look like in education?
AI-powered learning platforms can differentiate learning for students. They can provide accommodations faster than teachers could, with an individualized approach catered to each student’s situation in seconds. They can also provide students with answers faster than a Google search. While teachers alter questions on assignments pulled from the internet to keep students from looking up the answers, AI can answer those questions anyway, learning from a wide array of online sources and predicting an outcome with striking accuracy. It can write a student’s essay, modify it, and adapt it to sound like the student wrote it. It changes the landscape of education, and we must lean into the technology, not away from it. The harder education pushes away from AI, the less prepared our students will be to leverage tools that others are already comfortable embracing in this new landscape of learning and working. And the harder education fights, the more clearly it will realize it’s a losing battle.
This isn’t to say that the calls to pause AI from individuals and organizations like Louis Rosenberg, Elon Musk, Steve Wozniak, and the WGA are the wrong move.
Technology is developing faster than we could have ever imagined, outpacing our ability to control and regulate it. But the question is: are we calling to pause AI, or are we seeking to stop AI altogether? In education, it often feels like many are seeking to stop it.
Here are a few examples of what I’ve seen:
- Switching to pen and paper essays to prevent students from using ChatGPT to write essays.
- Removing phone and computer access from all classroom assignments because “they cannot be done with integrity” and “students will use AI”.
- Blocking access to ChatGPT on school Wi-Fi with content filters.
With Google, Microsoft, Apple, Meta, Amazon, and more leaning fully into AI, and with the goal of school being to prepare students for college and career readiness, removing AI chatbots from education entirely is the opposite of preparing students for the future. Lean into AI, but teach students how to use it to support their learning rather than replace it. Utilize the tools and change your instruction to support their implementation, rather than changing your instruction to “prevent” the use of AI. Students will use it anyway.
In My Classroom…
students have historically spent a day researching the differences between two chemistry topics and creating an argumentative pitch for or against a process based on its environmental impact. For example, the Haber-Bosch process for creating synthetic fertilizer is necessary to sustain crop production for our growing population but is detrimental to the environment. It typically takes a day for students to research and synthesize their findings. I could approach this in two ways:
- Actively avoiding. Change instruction so that students receive printed materials to read and cannot use their phones and laptops for a day, and then generate their pitch.
- Leaning in. Change instruction so that students ask Google Bard to draw from various sources, synthesize a summary, and present it alongside those sources. Students use the sources provided to check for accuracy, since AI has been known to hallucinate and present inaccurate information. Then, students write a prompt asking the AI to generate a pitch based on the information they supply. It generates a pitch, and students practice presenting it in their own words.
In scenario 2, my students have cut the time needed to find sources; instead, they use AI to surface sources that they still have to vet for accuracy. The AI then creates a pitch for or against the use of synthetic fertilizers, which they reword in their own presentation. This is the future of our workforce, and I believe it is critical for students to learn how to use these tools to keep up with current technology.

I think about the people who struggle with a new learning management system and refuse to use it. When it’s fully adopted, they’re forever playing catch-up, spending hours navigating a resource that colleagues who learned it from the start are now creating and innovating with. Falling behind now means our students will be one step behind those whose educational experiences leaned into AI.
So, my opinion? Lean in. Embrace it. Question its use in the classroom and the integrity of student work, but never actively work to prevent it, because it will be implemented one way or another. Regulations will come, and it’s far easier to scale back use once we learn how to regulate it than it is to catch up to everyone who is already innovating and creating because they tried and failed early. So… adapt lessons for AI and toward AI, not against it.