Imagine you have an art history essay on Baroque-era paintings due at midnight. It’s 11:30 p.m., and the deadline is quickly approaching. What if you could get your computer to write the essay for you in seconds?
With ChatGPT, that’s now a reality.
ChatGPT, short for Chat Generative Pre-trained Transformer, is an artificial intelligence chatbot created by the company OpenAI that, given a short prompt, can produce humanlike text on nearly any topic in seconds. The program was launched in late November 2022 and, according to OpenAI CEO Sam Altman, had one million users within five days of its release.
New York City’s Department of Education banned ChatGPT from school devices and networks at the beginning of the year due to fears of cheating, NBC reported. Several schools and universities around the world have followed suit, while others are actively embracing the new technology.
LSU cybersecurity professor Golden Richard said he uses ChatGPT in class to demonstrate how far AI has come, but he doesn’t have any interest in incorporating it into his curriculum.
“I don’t think things like ChatGPT are going to push Stephen King aside, because it’s not clear that they have the depth of creativity that humans have… But in terms of… English composition… they’re generally accurate, and it’s something that students can simply just turn in,” Richard said. “And that’s a little bit terrifying.”
Richard said ChatGPT works by “scraping the internet” for data and looking for patterns that enable it to create text and images that mimic human work. He worries that code-writing AI like ChatGPT will have negative consequences for computer coding.
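For readers curious what “looking for patterns” means in the simplest possible terms, the toy Python sketch below learns which word tends to follow which in a scrap of sample text and then strings words together from those statistics. It is a deliberate oversimplification invented for illustration: ChatGPT relies on a large neural network called a transformer trained on vast amounts of text, not a word-pair table.

```python
# Toy illustration of pattern-based text generation (a tiny Markov chain).
# This is NOT how ChatGPT actually works internally; it only illustrates
# the general idea of learning "what tends to come next" from example text.
import random
from collections import defaultdict

sample_text = (
    "the baroque era produced dramatic paintings "
    "the baroque era favored rich color and dramatic light "
    "painters of the era used dramatic contrast"
)

# Learn the pattern: which words follow each word in the sample.
followers = defaultdict(list)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# Generate new text by repeatedly sampling a plausible next word.
random.seed(1)
word = "the"
output = [word]
for _ in range(12):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```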
Richard pointed to Tesla’s self-driving cars. Based on the articles he’s read, the cars work well most of the time, but some critical coding errors have caused “pretty horrific accidents,” he said.
Richard said it’d be harder for students to use ChatGPT to cheat in his advanced coding classes, but it might be usable in other subjects.
“It’s always distressing to me when people sort of misuse the resources at their disposal at a university just to get through…and they don’t care about the fact that they’ve actually learned something or not,” Richard said. “There’s potential for [ChatGPT] to be an educational tool, but if you’re using it to circumvent learning processes at a university, then it’s just crazy.”
Mass communication professor Will Mari uses different types of AI for his classes. He calls Packback, an AI-powered discussion platform, his “robot TA” because it helps him detect whether AI was used in student responses.
Even without Packback, Mari said it’s still obvious if a student used AI to write their work.
“I can tell if a student has suddenly become really good at writing a B or B+ essay, because their human tone will be way off from the robot tone that often ChatGPT has, which is pretty monotone…So for now, at least, a fairly experienced human instructor can detect AI without another AI,” Mari said.
While Mari finds Packback useful for some things, he said his class mostly consists of projects, presentations and exams, which would be difficult to use AI for.
Mari also studies the history of technology and said people are reacting to ChatGPT in similar ways to how they’ve reacted to past technologies. For example, he said people worried they’d forget how to spell when the spell checker was invented in the 1970s.
Right now, Mari said we’re in the “panic” phase of ChatGPT.
“There’s often a panic cycle, followed by a gradual acceptance, followed by it becoming pretty mundane…I suspect that eventually this will become just another tool to use, and they’ll be less scary over time,” Mari said.
“I think my only concern really…is if students use [ChatGPT] to do the work that I think helps them become better writers…For my journalism students, my main worry will be they couldn’t handle sophisticated stories like news features or in-depth profiles and that kind of thing,” Mari said. “So, I think it would have hurt them more than it would hurt me.”
Lance Porter, a professor of social media branding and emerging media, said ChatGPT passes the Turing test “with flying colors.” The Turing test gauges a machine’s intelligence by whether a person can reliably tell its responses apart from a human’s.
“You can’t tell that it’s written by a computer whereas in the past…there’s something off about it,” Porter said.
It’s important to remember, Porter said, that just because it’s an AI doesn’t mean it’s unbiased.
“It’s hard to tell where [ChatGPT’s] information comes from, how it’s sourced, how accurate it is,” Porter said. “It looks accurate when you read it, and that’s kind of a frightening thing…I think there’ll be people that say, ‘Oh, this has got to be the most accurate thing because it doesn’t have any bias when it’s written.’ But guess what, all the information it pulls from has bias. So, there’s always those issues.”
For better or worse, Porter said ChatGPT “is here to stay.”