Nearly a year ago, on Nov. 30, 2022, ChatGPT, the chatbot that brought generative artificial intelligence to the mainstream, debuted, taking the internet by storm.
Since its release, many teachers and administrators have banned the use of artificial intelligence on any and all assignments. At the same time, many see AI as a tool that, used in moderation, can assist teaching and learning, including at the university.
“Saying, ‘Just use it however you want’ or banning it altogether, those two extremes aren’t really enough,” Jennifer Trivedi, assistant professor in the anthropology department, said. “We have to have a conversation in the middle ground about it.”
Trivedi has been using an AI system called Packback in her asynchronous ANTH101 class. Through the program, her 380 students get live AI feedback as they write, including grammatical checks, prompts to break up their sentence or paragraph structures and suggestions to add images or citations. That’s something Trivedi “couldn’t possibly do as they’re all writing.”
She also taught a lesson in the course about the ethical concerns surrounding AI and its use, focusing particularly on issues with sources and biases when navigating the media.
“Students are in a world now where AI is a thing,” Trivedi said. “They’re using it in some classes and being told not to use it in other classes, and they’re gonna go out into careers where I know some employers are having people use it, and some employers are saying don’t use it. So how do you make smart choices about when and where it’s acceptable to use it in that context?”
She believes there are spaces in which AI can be appropriate, such as the medical field, where it can scan through large amounts of data. However, Trivedi also emphasized awareness of potential problems with its use, including misinformation, disinformation and hallucination, when AI fabricates sources or information.
“It’s trained on human data, so it’s going to be reproducing biases that humans have in that data that it’s trained on,” she said.
She presented the ethical issues that she’s teaching about to the Board of Trustees in October because she believes that professors and administrators “need to be mindful of how [they] talk to students about [AI].”
If AI is used correctly, Trivedi believes it could make a difference in learning.
“If you want students to think creatively or think critically and write their own assignments because you want them to actually do the thinking work, you’re not concerned with where they get in the end,” Trivedi said. “The process of thinking is the point of the assignment.”
Joshua Wilson, an education professor at the university, has a similar perspective and argues that AI could give teachers more room and time to build actual connections with their students.
“I wonder if the impetus now will have to be almost like, as AI gets to be more AI, we need to be more human,” Wilson said. “Students who come to us, whether they’re in pre-K, 12th grade or whether they’re in college, they should know that the teacher is there to care.”
Wilson has worked to get funding to develop the use of AI in middle school science classes and has been integrating lessons about AI into his own teaching. A big issue he thinks AI could address is the “bleak” national writing performance in high schools.
“Writing assessment is hard for teachers because it takes so much time,” Wilson said. “They might believe writing is important and that we should help students learn to write but … from the survey data, we know that teachers don’t assign much writing.”
Wilson explained that while students need a lot of good feedback, many teachers are not trained well to provide it.
“I’m starting to slowly introduce it because I want teachers to start to think about ways that AI can help them in their job,” Wilson said.
In a recent class session, Wilson started by teaching his graduate students about how writing develops and how to identify strengths and weaknesses in student writing. He then had his students take that text and put it into ChatGPT, prompting it to identify the same characteristics and to write a learning objective in “ABCD format,” which is used in special education and is “very measurable.”
While ChatGPT did not format the objective correctly at first, it followed all the other instructions. Once the students clarified what ABCD format was, ChatGPT “did a really nice job” and was “pretty accurate and pretty consistent.”
“But unless you have gone through that process of learning about how writing develops, and you have the ability to confirm and make reasonable decisions about what the AI is giving you, you don’t want to be dependent on it,” Wilson said. “You want to use it as a tool.”
Wilson is also the co-chair of the task force on AI for Teaching and Learning at the university, which was officially designated as a working group in May.
He laid out the group’s goals for the next couple of years, including helping to educate the campus community and guiding policymaking by ensuring policies are informed by “sound practice and good understanding of what the technology is, its capabilities and its limitations.”
The working group serves as an advisory body, exploring and providing guidance on AI policies surrounding pedagogy, curriculum development, assessment of student learning, academic integrity and research ethics. From its original four co-facilitators, it has grown to more than 25 faculty and staff members from across the university.
“We’re making sure that we’re involving people who will be affected by this,” Wilson said. “We basically recognize that all of us who are entering the workforce, are in it or about to enter the workforce need to know something about AI, because it’s just going to be ubiquitous.”