STAFF OPINION: We need better conversations about AI

AI is the classroom boogeyman. That needs to change.

By Riley Martinez | May 1, 2024 2:00pm

There’s a strange phenomenon happening in our classrooms. Artificial intelligence (AI) statements have made their way into our syllabi. And yet, conversations about AI in the classroom often go something like this: “Here’s this technology everyone is talking about. Don’t use it, or at least try not to. And if you do, cite it.” 

There seems to be a lot of (well-founded, I think) hand-wringing about the subject. But if the rapid development of AI and the dizzying speed at which it’s entered many aspects of our lives signals anything, it’s that we need to rethink these conversations. 

Full disclosure: I’m a skeptic — and a bit of a dinosaur — when it comes to new technologies. I’m sympathetic, and frankly unsurprised, that we concern ourselves mostly with the pitfalls of AI in higher education. 

There’s a worry that AI hinders the skills higher education is designed to develop. And if we assume most students who use AI are asking ChatGPT to write their essays, answer their quiz questions or generate discussion posts, then that worry is all but certain to be realized. 

But when we admonish students like this, we’re really just telling them not to cheat, since most other forms of cheating hinder learning in much the same way. And, realistically speaking, I’m not sure our energy is best spent convincing students not to cheat. 

Another worry is that AI just isn’t very good. It cites fake sources, “hallucinates” information and, by any human standard, isn’t particularly insightful. On a good day, while it might help you see something you didn’t before, it can’t produce original ideas. 

But like any tool or technology, AI is better or worse depending on what you want it to do. We need to leverage AI skillfully, thoughtfully and ethically. And, in the classroom, that starts with honest, thorough conversations about its educational potential. 

For example, large language models (LLMs) can be the readers students don’t always have access to while they’re writing. They can provide feedback at varying levels of specificity and point out places in the text where a reader might get confused — a crucial consideration that students often overlook. 

And, really, that’s just scratching the surface. We’re now finding that AI can be used to generate practice exams, help teach English to speakers of other languages and aid in reading comprehension. And more developments are surely on the way. 

Still, I can imagine someone saying there’s a danger of becoming too cognitively dependent on AI. But if we’re in danger of AI turning us all into idiots, it’s because we’re avoiding it like the plague, and by the time we come around it will be too late to develop the habit of using it well. 

How can we integrate AI into assignments? How can it help us teach? How can it help us learn? What can it show us about human intellectual strengths — or weaknesses? In my experience, there haven’t been enough questions like these in discussions about AI in the classroom. 

But I’m ready to ask them. And you should be, too. 

Riley Martinez is Copy Editor for The Beacon. He can be reached at martinri24@up.edu

Have something to say about this? We’re dedicated to publishing a wide variety of viewpoints, and we’d like to hear from you. Voice your opinion in The Beacon.
