College student put on academic probation for using Grammarly: ‘AI violation’::Marley Stevens, a junior at the University of North Georgia, says she was wrongly accused of cheating.
Grammarly is a lot more than a spell checker. Here are some screenshots from their marketing page that specifically recommends using their product as a student.
I feel the need to point out, this is exactly the same type of feedback you’d get from a competent proofreader.
But you still need to put the content in there. All it does is the boring formatting stuff. The real crime is not teaching students LaTeX.
Ehhh…
They integrated GPT last year. It’s entirely possible to use Grammarly in a way that raises academic integrity concerns nowadays.
That’s still a bullshit vague intro. You still need to feed it what you’re introducing and, ideally, how you want to get there. Again, this depends on whether it’s an English writing class or something else, because the point of an essay is to convey your point. Knowing you need an intro is the key part; writing something to get into your meat is 40% of the boring bullshit you have to write in a report, and the conclusion and formatting are the other 40%. Using AI to streamline that is not cheating unless it’s an English writing class. These are tools you use to convey your point better. You need a point to begin with.
This is like saying calculators are gonna make math homework easier. Make better homework!
And it’s not like these AI detection tools aren’t snake oil either.
Using AI to streamline that is cheating if and only if the course syllabus defines it as cheating.
Also, I hate to break it to you, but somewhere between “many” and “most” college classes are writing classes in disguise, depending on your major. The ability to write well is massively important, and generative AI is prohibited for the same reason that teachers try to make sure you understand arithmetic before letting you use a calculator. The key difference is that writing is subjective and way more complex, so the best teachers can aim for is continuous improvement.
I say all this as a college student who uses AI nearly every day. A good chunk of my peers absolutely misuse it.
Indeed, they are.
My point is that this tool exists and you can’t say for certain that someone used it. If you want to give someone a real education, find a way to make sure they learn despite that. If you use AI for a nuanced essay, it’s not going to handle the nuance properly, and a teacher would grade that as subpar work. Good work with the AI means acting as an editor: determining whether what it says is accurate and whether it belongs in your paper. Bad work with the AI means not being an editor. There’s still a job the student has to do and learn.
I say this as someone who grades work handed in by students.
I totally agree, there is rarely any way to tell if (and more importantly, to what extent) student-submitted writing is AI generated. We’re probably also pretty close to AI being able to generate outstanding work while mimicking your own writing style. For this reason, in my mind, the era of take-home writing assignments is coming to a close.
I’m actually okay with this, as it will hopefully force teachers to be more creative with and involved in the learning process. One of my biggest takeaways from 12 years of grade school was that homework trends over the last few decades are patently absurd, fueled in large part by lazy teaching. I see AI as a chance to finally correct that trend.
Oh yeah, if it’s just basic homework, then Course Hero and those kinds of sources are the problem, and to me the bigger problem is that a lot of homework is lazy; it’s just prove-you’ve-read-the-textbook shit. Homework needs to get more analytical.
I believe the opposite should happen, like take-home tests. The professor knows the student is going to open the textbook and copy, so make the questions such that we’re evaluating the student’s ability to think. If you’ve ever had an open-book advanced math course, you’ll have an idea of what homework should become with a longer time window.
A calculator (in most cases) can’t just do a problem for you, and when it can, those calculators are banned (the reason you can’t use a TI-84 on gen chem exams in college, or a TI-89 in a calc 1 class). Such a tool means you really don’t have to understand anything to get the answer.

To me, your comment reads as if getting the answer by typing the problem into Wolfram Alpha is the same as working through it on your own, as long as you understand how WA got there. I wholeheartedly disagree that somebody using Wolfram Alpha to get all of their answers actually knows jack shit about math, kinda like how anybody using generative AI for writing doesn’t have to know jack shit about the subject; they just give a semi-specific prompt based on a small amount of prior research.

It’s very easy for me to type into a GPT bot “write a paper on the social and political factors that led to the Haitian revolution.” It’s a completely different experience to sift through documents, actually learn what happened, and then write about that. I’m fairly confident I could “write” a solid paper using AI without doing almost any research on any topic I know literally anything about. E.g., I don’t know much about the physics of cars, but I could definitely get generative AI to produce a decent paper on how and why increases in engine size can lead to increased efficiency, just by knowing that fact to be true and proofreading the mess the AI throws together for me.

The fact that you consider these tools the same as a calculator (which, I might add, we still often restrict the use of, e.g. no Wolfram Alpha on your multivariable final) is astounding to me, tbh.
My point is that the tool is out there and you can’t definitively prove that someone used AI. So we’d better figure out how to use it and design assessments with the assumption that someone is using it. ChatGPT is a fucking inaccurate mess; if you as a professor can’t keep up with that, you’re not doing your job. And using these AI detection tools is stupid and doesn’t fix the problem. So what do we do now?