As ChatGPT scores a B in engineering, professors scramble to update courses
Students are increasingly turning to AI to help them with coursework, leaving academics scrambling to adjust their teaching practices or debating whether to ban it altogether.
But one professor likens AI to the arrival of the calculator in the classroom, and thinks the trick is to focus on teaching students how to reason through different ways of solving problems – and to show them where AI may lead them astray.
Melkior Ornik, an assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, said that over the past two years both colleagues and students have fretted that many students are using AI models to complete homework assignments.
So Ornik and PhD student Gokul Puthumanaillam came up with a plan to see if that was even possible.
Ornik explained, “What we said is, ‘Okay let’s assume that indeed the students are, or at least some students are, trying to get an amazing grade or trying to get an A without any knowledge whatsoever. Could they do that?'”
The academics ran a pilot study in one of the courses Ornik was teaching – a third-year undergraduate course on the mathematics of autonomous systems – to assess how a generative AI model would fare on course assignments and exams.
The results are documented in a preprint paper, “The Lazy Student’s Dream: ChatGPT Passing an Engineering Course on Its Own.”
“In line with our concept of modeling the behavior of the ‘ultimate lazy student’ who wants to pass the course without any effort, we used the simplest free version of ChatGPT,” said Ornik. “Overall, it performed very well, receiving a low B in the course.”
But the AI model’s performance varied with the type of assignment.
Ornik explained, “What I think was interesting in terms of thinking about how to adapt in the future is that while on average it did pretty decently – it got a B – there was still a significant disparity between the different types of problems that it could deal with or it couldn’t deal with.”
On closed-form problems – multiple-choice questions or straightforward calculations – OpenAI’s ChatGPT, specifically GPT-4, did well, scoring almost 100 percent on those sorts of questions.
But when deeper thought was required, ChatGPT fared poorly.
“Questions that were more like ‘hey, do something, try to think of how to solve this problem and then write about the possibilities for solving this problem and then show us some graphs that show whether your method works or doesn’t,’ it was significantly worse there,” said Ornik. “And so in these what we call ‘design projects’ it got like a D level grade.”
As Ornik sees it, the results offer some guidance about how educators should adjust their pedagogy to account for the expected use of AI on coursework.
The situation today, he argues, is analogous to the arrival of calculators in classrooms.
“Before calculators, people would do these trigonometric functions,” Ornik explained. “They would have these books for logarithmic and trigonometric functions that would say, ‘oh if you’re looking for the value of sine of 1.65 turn to page 600 and it’ll tell you the number.’ Then of course that kind of got out of fashion and people stopped teaching students how to use this tool because now a bigger beast came to town. It was the calculator and it was maybe not perfect but decently competent. So we said, ‘okay well I guess we’ll trust this machine.'”
“And so the real question that I want to deal with – and this is not a question that I can claim that I have any specific answer to – is what are things worth teaching? Is it that we should continue teaching the same stuff that we do now, even though it is solvable by AI, just because it is good for the students’ cognitive health?
“Or is it that we should give up on some parts of this and we should instead focus on these high-level questions that might not be immediately solvable using AI? And I’m not sure that there’s currently a consensus on that question.”
Ornik said he’s had discussions with colleagues from the University of Illinois’ College of Education about why elementary school students are taught to do mental math and to memorize multiplication tables.
“The answer is well this is good for the development of their brain even though we know that they will have phones and calculators,” he said. “It is still good for them just in terms of their future learning capability and future cognitive capabilities to teach that.
“So I think that this is a conversation that we should have. What are we teaching and why are we teaching it in this kind of new era of wide AI availability?”
Ornik said he sees three strategies for dealing with the issue. One is to treat AI as an adversary and conduct classes in a way that attempts to preclude the use of AI. That would require measures like oral exams and assignments designed to be difficult to complete with AI.
Another is to treat AI as a friend and simply teach students how to use AI.
“Then there’s the third option which is perhaps the option that I’m kind of closest to, which is AI as a fact,” said Ornik. “So it’s a thing that’s out there that the students will use outside of the bounds of oral exams or whatever. In real life, when they get into employment, they will use it. So what should we do in order to make that use responsible? Can we teach them to critically think about AI instead of either being afraid of it or just swallowing whatever it produces kind of without thinking?
“There’s a challenge there. Students tend to over-trust computational tools and we should really be spending our time saying, ‘hey you should use AI when it makes sense but you should also be sure that whatever it tells you is correct.'”
It might seem premature to take AI as a given in the absence of a business model that makes it sustainable – AI companies still spend more than they make.
Ornik acknowledged as much, noting that he’s not an economist and therefore can’t predict how things might go. He said the present feels a lot like the dot-com bubble around the year 2000.
“That’s certainly the feeling that we get now where everything has AI,” he said. “I was looking at barbecue grills – the barbecue is AI powered. I don’t know what that really means. From the best that I could see, it’s the same technology that has existed for like 30 years. They just call it AI.”
Ornik also pointed to unresolved concerns around AI models, such as data privacy and copyright.
While those issues get sorted, Ornik and a handful of colleagues at the University of Illinois are planning to collect data from a larger number of engineering courses with the assumption that generative AI will be a reality for students.
“We are now planning a larger study covering multiple courses, but also an exploration of how to change the course materials with AI’s existence in mind: what are the things still worth learning?”
One of the goals, he explained, is “to develop a kind of critical thinking module, something that instructors could insert into their lectures that spends an hour or two telling students, ‘hey, there’s this great thing, it’s called ChatGPT. Here are some of its capabilities, but also it can fail quite miserably. Here are some examples where it has failed quite miserably that are related to what we’re doing in class.'”
Another goal is to experiment with changes in student assessments and in course material to adapt to the presence of generative AI.
“Quite likely there will be courses that need to be approached in different ways and sometimes the material will be worth saving but we’ll just change the assignments,” Ornik said. “And sometimes maybe the thinking is, ‘hey, should we actually even be teaching this anymore?'” ®
Speaking of education… President Trump today set up a task force to come up with plans to push AI into schools, from kindergarten to 12th grade.
The team will draft ways to “encourage and highlight student and educator achievements in AI, promote wide geographic adoption of technological advancement, and foster collaboration between government, academia, philanthropy, and industry to address national challenges with AI solutions,” the President added.
Earlier, US Education Secretary Linda McMahon confused AI with A1 at a public event.