Do Not Use ChatGPT for Any Research Work Until You Have Read and Understood These Rules

 Using ChatGPT for your thesis? Read this update now to avoid immediate rejection



When OpenAI released ChatGPT (then powered by GPT-3.5) in late 2022, social media exploded and content creation skyrocketed. If you were in academia, you likely saw the phenomenon: the floodgates of article publishing were flung open. Colleagues who had a backlog of papers quickly finalised their manuscripts and submitted them. The buzz was not just about being able to generate content that read professionally, but also about the ability to get references attached. I am fairly sure that few of them took the time to verify the information they got from ChatGPT, or the authenticity of those references.

Then, one sunny afternoon, a colleague announced the arrival of AI detectors. The news sent a shiver down almost everyone's spine. And as we anticipated, the emails started coming in like news of a deceased family member. One person would receive a rejection for his manuscript, and before he could tell anyone else, the next person's rejection email had arrived. It is fair to call it the rejection plague of 2023. Even for the few papers that were eventually published, the authors were reluctant to let other people read them, because before you are done with the first sentence, the AI robot voice is already ringing in your head.


Let's not lie to ourselves: AI, and in this case I mean large language models (LLMs), has undergone a massive transformation, with most models now able to generate natural-sounding language that is difficult to tell apart from actual human writing (What is an LLM? Check Google; that's what researchers do). But with each passing transformation come new rules of the game, especially for journals and publishers who need to maintain their standards and ensure that academia is filled with accurate information. Every game has rules, and if you want to play, you must understand them. These are the current rules for publication.

Using AI for research is like driving a very fast car. It can get you to your destination quickly, but if you do not know the rules of the road, you will crash. Before you generate one more word of text for your thesis or paper, you must understand the rules set by the big publishers. If you ignore these, your paper will be rejected, and your reputation could be damaged.



1. Journal Rules for Disclosure (Elsevier, Sage, etc.)

The biggest publishers in the world have made their stance very clear: AI cannot be an author. You cannot list "ChatGPT" or "Gemini" as a co-author on your paper because AI cannot take legal responsibility for the work. 

For example, if you are submitting to an Elsevier journal, you must be transparent. They allow you to use AI to improve your writing or generate ideas, but you must disclose it. You usually need to include a specific "Declaration of AI Use" statement at the end of your paper before the references. You must state exactly which tool you used and for what purpose (e.g., "ChatGPT was used to improve the readability of the abstract"). Note that Elsevier strictly prohibits using AI to create or alter scientific images or figures.

Another big player is Sage, which follows a similar path but distinguishes between "assistive" and "generative" use. If you just use AI to fix your grammar (assistive), you generally do not need to formally declare it, though it is good practice to be honest. However, if you use AI to generate new text, summaries, or ideas (generative), you must disclose it in your Methods or Acknowledgements section. The golden rule for both publishers is that you, the human, are 100% responsible for every word in the paper, regardless of whether AI helped you write it.

In the end, we cannot present the rules set by every journal, but the two covered here give a good preview of the standardised rules for AI usage. Remember also to check the specific publisher guidelines on the journal's website before you submit a manuscript.

2. Rules for Information Cross-checking (The Hallucination Trap)


AI is not a search engine; it is a text predictor. This means it can "hallucinate", which is a fancy way of saying it lies with confidence. It can invent citations that do not exist, create fake statistics, and attribute quotes to the wrong authors.

To cross-check, you must never blindly trust a citation given by AI. If ChatGPT says "(Smith, 2019) argues that...", you must go to Google Scholar, type in that specific paper title, and verify that it actually exists. If you cannot find the original PDF, delete the citation immediately. Additionally, check for logical contradictions: AI often contradicts itself within the same response. Read the generated text line by line to ensure the argument flows logically and matches the reality of your field.
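If you have many citations to check, a first pass can even be scripted. The sketch below is a minimal illustration, not a definitive tool: it assumes network access, uses the public Crossref API as one possible source of bibliographic records, and the helper names (`normalise`, `crossref_lookup`, `citation_exists`) are our own inventions. A Crossref match still does not excuse you from reading the actual paper.

```python
import json
import re
import urllib.parse
import urllib.request

def normalise(title: str) -> str:
    """Lowercase a title and strip punctuation so titles can be compared."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def crossref_lookup(title: str, rows: int = 5):
    """Ask the public Crossref API for works matching a title.

    Returns the candidate titles Crossref found (possibly an empty list).
    """
    url = ("https://api.crossref.org/works?rows={}&query.bibliographic={}"
           .format(rows, urllib.parse.quote(title)))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return [item["title"][0]
            for item in data["message"]["items"] if item.get("title")]

def citation_exists(title: str) -> bool:
    """True only if Crossref returns a work whose title matches closely."""
    return any(normalise(title) == normalise(found)
               for found in crossref_lookup(title))
```

Treat a "not found" result as a red flag to investigate manually, not as proof of fabrication; Crossref does not index everything, and real papers sometimes have slightly different recorded titles.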

3. The Rule of Data Privacy (The Secret Data)


This is the rule most beginners break. You must never, under any circumstances, paste your raw research data into a public AI tool if it contains personal details about your participants. If you interviewed "Mr. Kojo from Kumasi" and you paste his interview transcript into ChatGPT to ask for a summary, you have just breached his confidentiality. Public AI tools often save your inputs to train their future models, and once you paste that private information, you lose control over it. Only feed the AI anonymised data: remove all names, locations, and specific dates before you even ask for help with analysis.
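A crude first pass at anonymisation can be scripted before you paste anything anywhere. The sketch below is only an illustration under stated assumptions: the regex patterns and the place-name list are examples you must adapt to your own data, and no regex will catch every identifier.

```python
import re

# Example redaction patterns -- extend these for your own dataset.
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),
    "[DATE]": re.compile(
        r"\b\d{1,2}\s+(?:January|February|March|April|May|June|July|"
        r"August|September|October|November|December)\s+\d{4}\b"),
}

def anonymise(text: str, places=("Kumasi", "Accra")) -> str:
    """Replace titled names, long-form dates, and listed place names with tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    for place in places:
        text = re.sub(r"\b" + re.escape(place) + r"\b", "[LOCATION]", text)
    return text
```

Even with a script like this, read the output yourself before sharing it: regexes miss nicknames, job titles, and indirect identifiers that a human reader would spot immediately.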

So What?




You might be wondering why we have taken so long writing down rules and warnings. Is it to keep you from using these great tools? No. We are not telling you to avoid AI; we are telling you to respect it. The difference between a researcher who gets published and one who gets rejected is often not intelligence, but integrity.

The rules from Elsevier and Sage are not there to punish you. They are there to protect the credibility of science. If you use AI secretly, you are risking your entire career for a few minutes of convenience. But if you use it openly, cross-check your facts, and protect your data, you turn a dangerous shortcut into a decisive professional advantage.

So, go ahead and use ChatGPT to brainstorm your next big idea, or even to write. Use it to polish your grammar or explain difficult concepts. But always remember that you are the pilot, and the AI is just the engine. The engine gives the power, but you must hold the steering wheel. If you let go of the wheel, no matter how fast you are going, you will crash. Be smart, be honest, and let your research stand on solid ground.




