08 July 2024: Clinical Research
Awareness and Perceptions of ChatGPT Among Academics and Research Professionals in Riyadh, Saudi Arabia: Implications for Responsible AI Use
Wajid Syed, Adel Bashatah, Kholoud Alharbi, Safiya Salem Bakarman, Saeed Asiri, Naji Alqahtani
DOI: 10.12659/MSM.944993
Med Sci Monit 2024; 30:e944993
Abstract
BACKGROUND: Chat Generative Pre-Trained Transformer (ChatGPT) was created by OpenAI and has become a powerful tool used in research. This study aimed to assess the awareness and perceptions of ChatGPT among researchers and academicians at King Saud University, Riyadh, Saudi Arabia.
MATERIAL AND METHODS: A self-administered cross-sectional study was conducted among academicians and researchers from November 2023 to March 2024 using electronic questionnaires prepared in Google Forms. The data were collected using the Tawasul platform, which sent the electronic questionnaires to the targeted population. To determine the association between variables, the chi-square or Fisher exact test was applied at a significance level of <0.05. To find predictors of use of ChatGPT, multiple linear regression analysis was applied.
RESULTS: A response rate of 66.5% was obtained. Among those, 60.2% (n=121) had expertise in computer skills and 63.7% were familiar with ChatGPT. The respondents’ gender, age, and specialization had a significant association with familiarity with ChatGPT (p<0.001). Multiple linear regression analysis revealed a significant association between the use of ChatGPT and age (B=0.048; SE=0.022; t=2.207; p=0.028; CI=0.005–0.092), gender (B=0.330; SE=0.067; t=4.906; p=0.001; CI=0.197–0.462), and nationality (B=0.194; SE=0.065; t=2.982; p=0.003; CI=0.066–0.322).
CONCLUSIONS: The growing use of ChatGPT in scholarly research offers a chance to promote the ethical and responsible use of artificial intelligence. Future studies ought to concentrate on assessing ChatGPT’s clinical results and comparing its effectiveness with that of other chatbots and AI tools.
Keywords: Artificial Intelligence, Awareness, Behavioral Research, Culture, Teaching
Introduction
A language model chatbot called Chat Generative Pre-Trained Transformer (ChatGPT) was created by OpenAI and has since grown to be a powerful tool with a variety of uses in a range of industries, including research, development, and academic writing [1–5]. According to the literature, ChatGPT was initially introduced as a cloud computing platform in November 2022 [5,6]. Studies have revealed that shortly after ChatGPT launched, it had several million monthly active users, setting a record as the consumer application with the quickest growth ever [7].
Despite being computer software, a chatbot simulates and analyzes spoken or written human communication after being trained on enormous text libraries. People can converse with a machine in the same way they converse with other people [5]. Chatbots are essentially conversational tools that, depending on what they are created for, can help people with a variety of tasks; examples include voice chatbots like Alexa, Siri, and Google Assistant [5]. The goal of ChatGPT and a number of its sophisticated counterparts is to follow human conversational directions and provide detailed responses [5]. A previous study showed that GPT models are made to produce phrases, paragraphs, and full papers in a fashion that is coherent and consistent with human language. The primary strength of GPT models is their capacity for pre-training on massive volumes of text data and then fine-tuning on particular downstream tasks, such as text categorization or question answering. Pre-training entails training the model on a sizable corpus of text data (such as web pages or books) in an unsupervised manner, such that the model does not need any explicit labels or annotations for the training data [8]. Furthermore, as a language model, ChatGPT has shown potential in various medical and scientific applications, such as identifying research topics, assisting in clinical and laboratory diagnosis, and providing updates on new developments in healthcare [9,10].
ChatGPT is a relatively new concept, and little literature was identified in this field, particularly regarding health care professionals’ and researchers’ attitudes and perceptions about ChatGPT. In Saudi Arabia, the literature revealed that only 18.4% (n=1057) of health care professionals had used it in their practice, but most expressed interest in using it in the future. The literature also revealed that most healthcare professionals were comfortable with incorporating ChatGPT into their healthcare practice. In the United States, reports show that many workers are turning to ChatGPT to help with basic tasks, such as drafting emails, summarizing documents, and doing preliminary research [11]. In addition, 28% of respondents to an online poll on artificial intelligence in July 2023 said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools [11]. However, earlier reports suggested that some companies and organizations have restricted the use of ChatGPT [11], in part because ChatGPT has several drawbacks and occasionally generates answers that sound plausible but are incorrect or nonsensical [12]. This phenomenon is known as “hallucination” and is characteristic of language models like ChatGPT [13]. Additionally, ChatGPT has limited knowledge of events that took place after September 2021 [14]. Furthermore, a recent Artificial Intelligence Index assessment of the global acceptability of AI products and services ranked Saudi Arabia among the top 3 nations with the most favorable perceptions [15]. Therefore, this study aimed to assess awareness and perceptions of ChatGPT among academicians and researchers in Saudi Arabia.
Material and Methods
STUDY DESIGN AND SETTING:
Before data collection, the Institutional Review Board at King Saud University, College of Medicine approved the study protocol (reference no. 23/0629/IRB/, dated 24 August 2023). In addition, informed consent was obtained from participants before completing the questionnaires. Academicians and research professionals were informed about the study and assured that the responses they provided would be kept confidential. No individuals were pressured into providing an answer, and participants could withdraw from the study at any time. This cross-sectional study was conducted at King Saud University (KSU), among researchers and academicians working in various colleges affiliated with KSU in Riyadh, Saudi Arabia. Data collection was carried out from December 2023 to March 2024 using a simple random sampling technique. The inclusion criterion was researchers and academicians currently working in the colleges of Pharmacy, Nursing, Medicine, Dentistry, and Applied Medical Sciences. Individuals who did not meet this criterion, as well as researchers and academicians new to the university with less than 1 year of experience, were excluded.
SAMPLE SIZE ESTIMATION:
The sample size was calculated by the online calculator (
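The calculator reference above is truncated in the source, and the parameters used are not reported. For context only, the calculation most online sample-size calculators implement is Cochran's formula with a finite-population correction; the inputs below (95% confidence, 5% margin of error, p=0.5, and a hypothetical eligible population of 1000 staff) are illustrative assumptions, not figures from the study:

```python
import math

def cochran_n(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    # Cochran's formula: required sample for estimating a proportion
    # in a very large population
    return (z ** 2) * p * (1 - p) / (e ** 2)

def adjusted_n(n0: float, population: int) -> int:
    # Finite-population correction, rounded up to a whole respondent
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_n()              # 384.16 for z=1.96, p=0.5, e=0.05
print(math.ceil(n0))          # 385 (very large population)
print(adjusted_n(n0, 1000))   # 278 (hypothetical pool of 1000 staff)
```

With these conventional defaults the familiar n≈385 emerges; the correction shows why a bounded pool of eligible university staff would require fewer respondents.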
STUDY INSTRUMENT/QUESTIONNAIRE:
A questionnaire was developed and modified according to the needs of this study [16–18]. Part 1 collected demographic information, including age, gender, professional classification, and the college to which respondents belonged. Part 2 contained 5 items on respondents’ computer skills, familiarity, comfort, and perceptions of ChatGPT; these questions were open-ended and multiple choice. Part 3 consisted of 6 items collecting information on awareness of ChatGPT, using binary and multiple-choice answers. Part 4 collected information about participants’ beliefs about the use of ChatGPT, assessed on a 5-point Likert scale ranging from strongly agree to strongly disagree. The last part collected information on perceived obstacles to using ChatGPT in healthcare practice, with 10 items measured on a binary scale (Yes/No). After the initial draft of the questionnaire was prepared, it was pilot tested among randomly selected researchers and academicians (n=30), and the pilot responses were statistically evaluated. A Cronbach’s alpha of 0.75 suggested that the questionnaire was sufficiently reliable and consistent for use in the study.
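The pilot data behind the reported Cronbach's alpha of 0.75 are not published; as an illustration only, here is a minimal pure-Python sketch of how Cronbach's alpha is computed from per-item Likert scores (the toy responses below are invented):

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item,
    # all covering the same respondents in the same order
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_variance = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# invented 5-point Likert responses: 3 items x 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.89 for this toy data
```

Values of 0.7 or higher are conventionally read as acceptable internal consistency, which is the threshold the study's 0.75 clears.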
DATA COLLECTION PROCEDURE:
A pre-structured and validated final questionnaire was distributed individually to the academicians and research professionals through the university questionnaire center, Tawasul. For data collection, a Google form containing the electronic questionnaire was prepared and submitted to the Tawasul platform by email. The Tawasul system identified the targeted population and sent invitations to participate in the study through university email. To achieve the maximum number of responses, repeated email reminders were sent. Convenience sampling was applied for data collection. The academicians and research professionals were assured that the responses would be used only for research purposes, that confidentiality would be maintained throughout the study, and that they could withdraw from the study at any time. The questionnaires were collected upon completion.
STATISTICAL ANALYSIS:
Data obtained from the respondents were recorded and documented using SPSS, version 27. Descriptive statistics, such as percentages (%) and frequencies (n), were calculated. To determine the association between variables, the chi-square or Fisher exact test was applied at a significance level of <0.05. To identify predictors of ChatGPT use, multiple linear regression analysis was applied.
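As a sketch of the chi-square test of independence applied here, the following computes the Pearson statistic for a hypothetical 2×2 table of gender against familiarity with ChatGPT. The counts are illustrative, not the study's data, and a real analysis would take the exact p-value from SPSS or scipy.stats.chi2_contingency:

```python
def chi_square(table):
    # Pearson chi-square statistic for an r x c contingency table
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical counts: rows = male/female, columns = familiar/not familiar
table = [[80, 60],
         [48, 13]]
stat = chi_square(table)
print(round(stat, 2), stat > 3.841)  # 8.53 True; 3.841 = critical value, df=1, alpha=0.05
```

A statistic above the df=1 critical value of 3.841 corresponds to p<0.05, i.e. gender and familiarity would not be independent in this toy table.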
Results
SOCIODEMOGRAPHIC CHARACTERISTICS:
A total of 302 questionnaires were returned by the respondents, but 15.6% (n=47) did not meet the inclusion criteria and 17.9% (n=54) were incompletely answered, leaving 201 usable questionnaires (response rate, 66.5%). Among the respondents, 54.2% (n=109) were aged 31–35 years, most were male (69.7%), 43.8% were researchers, and 68.7% were at the College of Nursing. The detailed frequencies of the demographic and other characteristics of the respondents are given in Table 1.
According to findings, 60.2% (n=121) of the respondents had expertise in computer skills, and 63.7% were familiar with the term “ChatGPT”. When asked how comfortable they would be using ChatGPT in their healthcare practice, 155 (77.1%) revealed they would be somewhat comfortable (Figure 1). In addition, 80.1% (n=161) said they were interested in reading about ChatGPT and other AI models. The detailed responses of academicians and research professionals, computer skills, familiarity, comfort, and perceptions about ChatGPT are given in Table 2.
Table 3 details awareness of ChatGPT: 20.9% (n=42) of respondents reported having used ChatGPT in their research, while 80.1% (n=161) had accessed or signed up for ChatGPT, and a similar percentage had asked a query of ChatGPT. Furthermore, 76.6% (n=154) reported that ChatGPT positively influenced their professional career (Figure 1). In addition, 68.7% (n=138) were planning to make substantial changes in their career plans because of ChatGPT and other similar AI chatbots.
Regarding participant’s beliefs about ChatGPT, 68.7% had a positive attitude about this technology, and 88.1% said that language models (eg, ChatGPT) could increase productivity. There was a strong belief (85.6%) that language models can have a positive impact on education, but more than two-thirds of participants were neutral about trusting language models to handle customer service or act as an interface for government agencies. In the survey, 66.7% of participants strongly agreed that language models could generate biased or discriminatory content. It was strongly feared by 71.1% of respondents that language models might generate incorrect or nonsensical content. There was a neutral opinion among half of the participants regarding the possibility that people might become too dependent on these types of technologies. There was strong agreement among 68.2% of participants that these systems might become intelligent or sentient and pose a threat to humans, while 62.7% were concerned about potential job losses as a result of these technologies. Furthermore, 49.3% of the participants had a neutral opinion that these technologies have more advantages than disadvantages (Table 4).
In regards to perceived obstacles to the use of ChatGPT in the field of healthcare, 93.5% believed there was a lack of credibility/unknown source of data in ChatGPT, while a similar percentage were afraid of making harmful or wrong medical recommendations. Most participants said they did not have a ChatGPT option in their settings, and 87.6% thought AI chatbots were not well-developed. In addition, 88.1% of those surveyed believed medical-legal implications and not knowing what AI model can be used in healthcare are obstacles to using ChatGPT. Detailed information about the perceived obstacles to using ChatGPT in healthcare practice is described in Table 5.
The associations between academicians’ and researchers’ familiarity with ChatGPT and factors such as gender, age, year of study, specialization, and professional classification were determined using chi-square and Fisher exact tests at the <0.05 level (Table 6). Gender and age had a significant association with familiarity with ChatGPT (p<0.001), suggesting that they are significant predictors of familiarity. Specialization and professional classification were also significantly associated with familiarity with ChatGPT (p<0.001).
To determine the relationship between the use of ChatGPT and respondents’ age, gender, nationality, classification, and profession, a multiple linear regression model was utilized in which age, gender, nationality, classification, and profession were the explanatory variables and use of ChatGPT was the dependent variable. The analysis revealed a significant association between the use of ChatGPT and age (B=0.048; SE=0.022; t=2.207; p=0.028; CI=0.005–0.092), gender (B=0.330; SE=0.067; t=4.906; p=0.001; CI=0.197–0.462), and nationality (B=0.194; SE=0.065; t=2.982; p=0.003; CI=0.066–0.322), as shown in Table 7; all other variables were not significantly associated with the use of ChatGPT.
Discussion
Few studies have evaluated awareness and perceptions of ChatGPT among academicians and research professionals in Riyadh, Saudi Arabia, and the literature on this topic is scant both in Saudi Arabia and internationally [19,20]; most relevant studies were conducted among healthcare workers [10,21]. The present study may make an important contribution to the scientific literature on awareness, familiarity, and present and future use of ChatGPT among the studied population in Saudi Arabia and other countries, and could serve as a reference for much-needed future studies. Educational institutions may also use the results to create relevant programs that increase professionals’ and researchers’ understanding of the applicability of ChatGPT and other AI technologies. Furthermore, few data are available on the views and experiences of academicians and researchers. Such studies are urgently needed to close the information gap about the use of AI chatbots in academic writing and to offer guidance for the creation of new policies and solutions.
Our study revealed that most of the respondents were familiar with the term ChatGPT, but few were very comfortable using it. In addition, 80.1% of them agreed that this survey increased their interest in reading about ChatGPT and other AI models. With regard to the utilization of ChatGPT in research, 20.9% had used it, and 76.6% said that ChatGPT is likely to help their professional career. These findings are similar to those of earlier studies of researchers and healthcare professionals [16–18]. For instance, Abdelhafiz et al found that 67% of researchers had heard of ChatGPT, but only 11.5% had used it in their research, primarily for rephrasing paragraphs and finding references. Interestingly, over one-third of the researchers in that study indicated they supported listing ChatGPT as an author in scientific publications [16]. Similarly, another study among healthcare professionals revealed that 18.4% had used ChatGPT, while most non-users expressed interest in utilizing AI chatbots in the future. In addition, previous findings also reported that 75.1% of health care professionals were comfortable with incorporating ChatGPT into their healthcare practice [17]. A 2023 study by Bodani et al found that approximately 77% of respondents were familiar with ChatGPT, but 51.4% of them did not use it frequently [18].
In the present study, more than two-thirds (68.7%) of the academicians and researchers demonstrated positive attitudes towards ChatGPT, and only 3.0% reported negative attitudes. The current findings agree with previous findings published elsewhere. For instance, a study in Egypt by Abdelhafiz et al reported a positive attitude and revealed that 42.5% of respondents were willing to use ChatGPT in the future, and 51.5% of the researchers reported that ChatGPT is very useful in academic writing [16]. Similarly, a recent study by George et al found that 43.4% of respondents showed positive attitudes and believed that responses from ChatGPT are reliable and accurate, and that ChatGPT could be used to support academic activities without violating ethical norms [22]. These findings highlight the necessity of educating researchers, academicians, and other healthcare workers on the use of AI tools. Given the current extensive use of AI chatbots in all fields of research, including healthcare, adequate education and knowledge may help them use ChatGPT while adhering to ethical norms, which is extremely important.
In the present study, 61.7% of respondents believed that language models like ChatGPT can boost productivity, and 39.8% agreed that ChatGPT can have a positive impact on education. These findings support the results of earlier studies showing that medical students were significantly more likely to use ChatGPT for academic writing than other students, and that senior students exhibited a higher likelihood of academic ChatGPT use than juniors [22]. The literature shows that ChatGPT expedites academic writing and helps complete students’ assignments by establishing a mechanism that allows students to ask questions and receive prompt answers regarding assignments or concepts [23,24]. This increases involvement and frees up more time to concentrate on providing high-quality instruction [23,24].
In this study, 43.6% of the academicians and researchers used ChatGPT for problem solving, while 24.3% used it for information gathering, and 16.6% revealed that ChatGPT is helpful for content generation. However, some of the present results disagree with those of previous studies. For instance, Abdelhafiz et al revealed that 11.5% of researchers had used ChatGPT in their work for rephrasing paragraphs and finding references, for data analysis, and for writing parts of academic articles [16]. Similarly, a study among healthcare workers revealed that ChatGPT was helpful for a variety of tasks related to healthcare, including making medical decisions, supporting patients and their families, evaluating the medical literature (48.5%), and helping with medical research (65.9%). Furthermore, 76.7% of healthcare workers agreed that ChatGPT would benefit healthcare systems in the future [10]. However, previous research has also identified several possible major drawbacks of ChatGPT in healthcare education and research, including moral concerns, intellectual property problems, lack of novelty, erroneous content, gaps in its knowledge base, and erroneous citations, and these findings suggested that using ChatGPT could potentially reduce users’ cognitive abilities [16,18,25].
The present study has some limitations. First, the use of a self-administered online questionnaire could have introduced social desirability bias into the results. Second, the data were limited to a single university in a single region of Saudi Arabia, so the findings may not be representative of other academic institutions nationally or internationally. Finally, the convenience sampling approach is a possible limitation of the study sample. This study should be repeated with larger samples from other regions of Saudi Arabia to establish higher-quality evidence.
Conclusions
The growing use of ChatGPT in scholarly research offers a chance to promote the ethical and responsible use of artificial intelligence. The creation of sophisticated text analysis techniques for detecting misleading or deceptive content should be a top priority of academic research, as should the creation of courses that teach academics how to use ChatGPT and other AI technologies appropriately. Such initiatives may secure the benefits of AI augmentation without compromising ethics. For ChatGPT and other AI tools to be implemented successfully in academic and research settings, their trustworthiness and dependability must be ensured. Future studies ought to concentrate on assessing ChatGPT’s clinical results and comparing its effectiveness with that of other chatbots and AI tools.
Tables
Table 1. Demographic and work characteristics of academicians and research professionals.
Table 2. Computer skills, familiarity, comfort, and perceptions regarding ChatGPT among academicians and research professionals.
Table 3. Awareness of ChatGPT among academicians and research professionals.
Table 4. Participants’ beliefs about the use of ChatGPT.
Table 5. Perceived obstacles among academicians and researchers to using ChatGPT in healthcare practice.
Table 6. Comparison of academicians’ and researchers’ demographic characteristics and academic rank with respect to familiarity with ChatGPT.
Table 7. Multiple linear regression analysis of demographic variables of academicians and researchers and their use of ChatGPT.
References
1. King MR, ChatGPT, A conversation on artificial intelligence, chatbots, and plagiarism in higher education: Cell Mol Bioeng, 2023; 16(1); 1-2
2. Liebrenz M, Schleifer R, Buadze A, Generating scholarly content with ChatGPT: Ethical challenges for medical publishing: Lancet Digital Health, 2023; 5(3); e105–6
3. McGee RW, Who were the 10 best and 10 worst US presidents?: The opinion of Chat GPT (artificial intelligence) February 23, 2023
4. Wu C, Yin S, Qi W, Visual chatbot: Talking, drawing and editing with visual foundation models: arXiv, 2023; 2303; 04671
5. Ali MJ, Djalilian A, Readership awareness series – paper 4: Chatbots and chatgpt-ethical considerations in scientific publications: Semin Ophthalmol, 2023; 38(5); 403-4
6. Dwivedi YK, Kshetri N, Hughes L, “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy: International Journal of Information Management, 2023; 71; 102642
7. Hu K, ChatGPT sets record for fastest-growing user base-analyst note: Reuters, 2023; 12; 2023
8. GPT Evolution. Available at: https://medium.com/the-techlife/evolution-of-openais-gpt-models-8148e6214ee7
9. Dave T, Athaluri SA, Singh S, ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations: Front Artif Intell, 2023; 6; 1169595
10. Temsah MH, Aljamaan F, Malki KH, ChatGPT and the future of digital health: A study on healthcare workers’ perceptions and expectations: Healthcare, 2023; 13; 1812
11. Reuters: ChatGPT fever spreads to US workplace, sounding alarm for some. Available at: https://www.reuters.com/technology/chatgpt-fever-spreads-us-workplace-sounding-alarm-some-2023-08-11/
12. OpenAI: ChatGPT: Optimizing language models for dialogue. November 30, 2022 [Archived from the original on November 30, 2022. Retrieved December 5, 2022]
13. Lakshmanan L, Why large language models like ChatGPT are bullshit artists: becominghuman.ai, December 16, 2022 [Archived from the original on December 17, 2022. Retrieved January 15, 2023]
14. What can ChatGPT maker’s new AI model GPT-4 do?: ABC News, March 15, 2023 [Archived from the original on March 20, 2023. Retrieved March 20, 2023]
15. HAI: Artificial Intelligence Index Report 2023 Available online at https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
16. Abdelhafiz AS, Ali A, Maaly AM, Knowledge, perceptions and attitude of researchers towards using ChatGPT in research: J Med Syst, 2024; 48(1); 26
17. Temsah MH, Aljamaan F, Malki KH, ChatGPT and the future of digital health: A study on healthcare workers’ perceptions and expectations: Healthcare, 2023; 11; 1812
18. Bodani N, Lal A, Maqsood A, Knowledge, attitude, and practices of general population toward utilizing ChatGPT: A cross-sectional study: SAGE Open, 2023; 13(4); 21582440231211079
19. Bin-Nashwan SA, Sadallah M, Bouteraa M, Use of ChatGPT in academia: Academic integrity hangs in the balance: Technology in Society, 2023; 75; 102370
20. Kamoun F, El Ayeb W, Jabri I, Exploring students’ and faculty’s knowledge, attitudes, and perceptions towards ChatGPT: A cross-sectional empirical study: Journal of Information Technology Education: Research, 2024; 23; 004
21. Rahman MS, Sabbir MM, Zhang J, Examining students’ intention to use ChatGPT: Does trust matter?: Australasian Journal of Educational Technology, 2022; 51-71
22. George Pallivathukal R, Kyaw Soe HH, ChatGPT for academic purposes: Survey among undergraduate healthcare students in Malaysia: Cureus, 2024; 16(1); e53032
23. Fauzi F, Tuhuteru L, Sampe F, Analysing the role of ChatGPT in improving student productivity in higher education: Journal on Education, 2023; 5(4); 14886-91
24. Gill SS, Xu M, Patros P, Transformative effects of ChatGPT on modern education: Emerging era of AI Chatbots: Internet of Things and Cyber-Physical Systems, 2024; 4; 19-23
25. Sallam M, ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns: Healthcare (Basel), 2023; 11(6); 887