The Final Frontier In Education: Is AI In Schools The End Or The Beginning?

Shamajae Bruner

Photo Illustration by Taylor Ensey
As artificial intelligence becomes a fixture in classrooms, some teachers are raising concerns about how this new technology might impact student learning and engagement. 


Gemini, ChatGPT, Feynman AI, Alexa and even search engines such as Google may use generative AI. You likely use it sometimes without even realizing it. 


A 2023 Pew Research Center study found that a quarter of public K-12 teachers said using AI in schools does more harm than good. Some 32% said they believe it is equally harmful and beneficial. Six percent said it does more good than harm, and 35% said they were unsure. 


AI has many upsides, such as quickly and conveniently helping teachers make lesson plans during a busy week. However, it also has its downsides for students and teachers. 

One of the biggest potential downsides of using AI is the difficulty of making sure the information it provides is reliable. 


“AI will never be useful until the human evaluates how well it is written,” said Wendy James, director of the North Institute for Teaching and Learning at Oklahoma Christian University and an expert on AI use in the classroom. 


A student or teacher could give AI a good prompt, but it is up to the person to determine whether the information it generates is reliable. This is known as human-in-the-loop (HITL), a model describing how humans work with AI systems to balance automation with human judgment. 


Universities such as Oklahoma Christian have AI policies to regulate the use of AI among students. James said the university has sample policies for professors to adopt, should they choose. They range from zero AI use — a difficult proposition because AI is embedded into many search functions and internet operations users take for granted — to AI use with disclosure of prompts and outputs. 


K-12 schools in many cases are behind universities in AI policy adoption. James said policies such as those at OC could benefit students and teachers and provide some clarity regarding the new technology in an uncertain time. Students will be able to know what use of AI is acceptable in the classroom. Teachers will be able to set that standard or expectation for students when it comes to assignments and AI use. And such policies could minimize students being falsely accused of using AI.


AI-driven cheating is becoming a serious issue in schools. AI can generate an answer in no time, and these answers may be indistinguishable from something a student wrote. Many teachers use AI detectors to catch this kind of cheating. However, free AI detectors can do more harm than good, James emphasized, because they have a very high false positive rate. Paid detectors work much better, but they can be expensive for cash-strapped schools, she said. 


A major factor many people have not considered when using AI is intellectual property. Once you put your ideas or images into an AI tool, the AI company may claim rights to them. Your thoughts and ideas lose that sense of originality and may be used to train software models, something to take into account when considering using AI for creative works or original research. 


As AI expands in classrooms, more students will use it. But when AI generates information that is only right about 75% of the time, someone has to fact-check and verify whether the information is solid. 


James envisions a future in which AI pushes students to learn the material more deeply rather than spending time on less consequential tasks. 


“This critical way of being able to look at it will actually cause students to have to know the material better. But they will also learn how to look at something and determine right and wrong,” she said. 

