Scientific Writing, Reviewing, and Editing for Open-access TESOL Journals: The Role of ChatGPT

OpenAI’s launch of ChatGPT in November 2022 brought a groundbreaking development to academic writing, as this chatbot has demonstrated its ability to draft “high-quality” papers for submission to academic journals. Despite ongoing efforts to discourage or prohibit the publication of artificial intelligence (AI)-generated papers, ChatGPT’s influence on the scientific writing, reviewing, and editing processes for open-access journals cannot be ignored. This innovation presents a challenge to academic publishing: AI-produced papers can be difficult to differentiate from human-authored work, and tools for detection remain limited. This paper outlines obstacles in academic publishing following ChatGPT’s emergence. The academic community should ponder the implications of AI-generated papers and explore ways to manage such material in scientific writing and publishing.


Introduction
Rapid advances in artificial intelligence (AI) have sparked a paradigm shift in how scientific articles are drafted and reviewed (Gilat & Cole, 2023). The advent of sophisticated tools such as ChatGPT has revolutionized scientific writing and review by offering a highly streamlined approach. This technology has the potential to produce "high-quality" scholarly papers, greatly reducing the effort that traditional writing demands. However, as with any technological innovation, the infusion of AI tools into scientific writing and reviewing is accompanied by several pitfalls.
One challenge involves AI tools' capacity to adopt a humanlike writing style. Some reviewers and editors may struggle to distinguish between papers authored by people and those generated by AI tools. This lack of differentiation could prompt the publication of substandard or even fraudulent research, calling into question the integrity of academic publishing.
Another concern pertains to the ethics of employing AI tools for scientific writing and review tasks. For instance, the intellectual property rights of authors on whose work AI tools depend are worth considering. Using these tools for scientific writing may even be akin to plagiarism, a known problem in academic publishing.
These issues underscore the need for the academic community to develop best practices for using AI tools in scientific writing. Steps can also be taken to protect creators' intellectual property and to mitigate plagiarism. In so doing, the academic community can ensure that AI tools are used responsibly in the pursuit of knowledge and innovation.
To enhance the quality of academic writing and publishing, it is essential to incorporate feedback from key stakeholders such as authors, editors, and reviewers. The potential impacts of tools such as ChatGPT on academic writing and reviewing cannot be fully understood without input from the parties involved in the publishing process. This paper summarizes feedback obtained from authors, reviewers, and guest editors of International Journal of TESOL Studies (IJTS). The study was conducted for two purposes: (1) to gain a deeper understanding of how ChatGPT could influence academic writing, reviewing, and editing; and (2) to identify how best to navigate these potential effects.

Method
A participant-oriented approach was taken in this study, featuring interviews with a purposive sample of people possessing research experience in computer-assisted language learning. The sample comprised IJTS authors, reviewers, and editors. These professionals were selected given their potential to provide rich insights and diverse perspectives on using ChatGPT. Four authors, three reviewers, and two guest editors agreed to be interviewed.
Interview questions were intended to explore respondents' experiences with and knowledge of ChatGPT, their perceptions of the tool, and their opinions about academic publishing in light of ChatGPT's emergence. Interviews were held in either English or Chinese (based on respondents' preference) and were completed via virtual platforms such as Zoom, Tencent Meeting, or WeChat. This approach enabled a thorough investigation of respondents' views while allowing for convenient data collection.

Authors' comments
The authors shared positive and negative views of ChatGPT. Their criticisms are summarized below.
One problem is the potential for AI to generate inaccurate information. Author 4 also warned of ChatGPT's broader effects on human cognition:

One negative and detrimental consequence of using ChatGPT is its potential impact on human nature. There is a risk that we may become overly reliant on machines and succumb to laziness. This over-reliance could lead to a gradual decline in our ability to memorize, think critically, synthesize information, and generate language that is truly "humanistic" in nature. In the long term, we may even lose our capacity or desire to make decisions, as machines are capable of making "better", more "reasonable", and "rational" decisions based on their vast databases. (Author 4)
These respondents also offered positive comments about ChatGPT, namely related to saving time, arranging text, and recasting material in an academic tone. They acknowledged AI's potential value for academic writing and publishing, as long as these tools are used transparently.

I do see some benefits to using ChatGPT, such as its ability to check for grammar and spelling mistakes and generate well-written content in various writing styles. (Author 1)

I do believe that there is potential for a mutually beneficial relationship between human beings and ChatGPT, as the tool may require our knowledge to improve its language system. (Author 2)

AI learns faster and more systematically than humans. I myself have gradually formed the habit of asking ChatGPT first when I want to know or search for something. The answers may not be accurate, but they give me a clear direction for where to look. (Author 4)
These authors appeared to have experience using ChatGPT. Despite recognizing some of the tool's strengths (e.g., saving time and possibly producing better-quality content), they voiced concerns about its potential to generate inaccurate information, fake articles, and even replace human authors and publishers entirely. They further noted the importance of using AI tools responsibly in academic writing and publishing.

Reviewers' comments
Reviewers' remarks captured multiple challenges arising from AI tools' penetration into academic publishing. These respondents argued that difficulties telling apart AI- and human-generated papers could complicate the review process. The reviewers also contemplated who might be responsible for identifying AI-produced content. They commented on the high costs associated with publication as well.
As a reviewer, I find it challenging to differentiate between papers that are AI-generated versus those authored by human beings. This is particularly difficult given the limited time we have to review each paper and the potential for AI-generated papers to mimic the writing style of human authors. (Reviewer 1)

Personally, I do not have the technical knowledge or expertise to detect the difference between AI- and human-generated papers. This can be a concern when reviewing papers, as it may lead to potential biases or errors in the review process. (Reviewer 2)

In my opinion, it is not the responsibility of the reviewers to identify AI-generated papers. We are providing a voluntary service to the publisher, yet they charge exorbitant fees for publishing papers. It is important for publishers to take on this responsibility and ensure that AI-generated papers are clearly identified and labeled in the publication process. (Reviewer 3)

These insights emphasize the need for transparency and responsibility in deploying AI tools to facilitate academic publishing. Robust guidelines are clearly needed to address these concerns.

Editors' comments
Editors shared interesting ideas about using ChatGPT for academic editing.
In my experience, ChatGPT-generated papers lack the depth and insight of papers written by human authors. They tend to be overly general and lack the informative quality that is required for high-quality academic publishing. (Editor 1)

I think that future editors will face significant challenges in dealing with ChatGPT-generated content. This is still an unknown area, and it is difficult to predict how AI tools like ChatGPT will continue to shape the landscape of academic publishing. However, I believe that editors will have an important role in ensuring that submitted papers meet the standards of quality and rigor that are required in academic publishing. (Editor 2)

The interviewed editors generally expressed concerns about the quality of ChatGPT-produced papers, with Editor 1 stating that such papers lack depth, tend to be overly generic, and are poorly informative. Editor 2 predicted that editors would find ChatGPT-generated content troublesome but would nonetheless have a part to play in safeguarding the quality of academic publishing.

Discussion and Concluding Remarks
Overall, the use of AI tools in scientific article writing and reviewing is expected to have substantial impacts in the coming years. Reviewers and editors must remain vigilant to guarantee that scientific standards are maintained. Although some policies for detecting AI-generated papers have been drafted, few tools are available to implement these regulations (Hu, 2023). The academic community should adopt policies that ensure transparency and accountability in scientific writing and reviewing. Editors and reviewers should also be given training and support in addressing AI-generated material. As readers, reviewers, and editors, we must be cognizant of inappropriate AI use and continue to evaluate papers on the basis of scientific accuracy, validity, and originality. AI tools' ongoing evolution is sure to dynamically alter academic publishing. This is only the beginning.