Open Access

Bridging the Divide: An Empirical Investigation of Artificial Intelligence and Generative Artificial Intelligence Integration Across Genders, Disciplines and Academic Roles

13 Nov 2024


Introduction

The integration of artificial intelligence (AI) into academia has evolved from its emerging stages focused on computational and logical tasks to its current state where Generative AI (GenAI) technologies (Banh & Strobel, 2023) are poised to transform educational and research methodologies. This evolution, marked by seminal works and technological milestones (Jordan & Mitchell, 2015; Turing, 2009; Vaswani et al., 2017), highlights the transition from theoretical AI and GenAI applications to practical, impactful uses in diverse academic fields.

The potential transformation of academia through AI and GenAI is profound and multifaceted. These technologies offer opportunities for personalised learning, where educational content can be tailored to the needs and pace of individual students, potentially revolutionising teaching methodologies and learning outcomes (Chiu, 2023; Mao et al., 2024). In research, GenAI tools can automate literature reviews, data analysis and even hypothesis generation, significantly accelerating the pace of discovery and allowing researchers to tackle more complex questions. Additionally, GenAI can facilitate interdisciplinary collaboration by translating complex concepts across domains, breaking down silos between fields (Dwivedi et al., 2023; Ooi et al., 2023). However, for this transformation to be fully realised, academia must navigate the challenges of integrating GenAI ethically and effectively, ensuring that these tools augment rather than undermine the foundational values of critical thinking, creativity and intellectual integrity that underpin scholarly work.

Incorporating recent desk research into the literature review, this study examines the roles of AI-driven personalisation in learning (Dimla et al., 2024), the support AI offers to educational administration (Marengo et al., 2024) and the potential biases within AI algorithms (Idowu, 2024). This comprehensive context provides a solid foundation for understanding the implications of AI and GenAI in academia.

In the context of distance learning and e-learning, AI and GenAI technologies can enhance accessibility, personalise learning experiences and improve the efficiency of educational processes. By providing adaptive learning environments that respond to individual student needs, these technologies help bridge gaps in traditional educational settings, making education more inclusive and effective. AI-driven tools can support real-time feedback and assessment, ensuring that students receive timely and relevant support regardless of their physical location. This capability is particularly crucial for distance learners, who often face challenges in accessing resources and receiving personalised attention. Furthermore, AI and GenAI can automate administrative tasks, freeing educators to focus more on instructional design and student interaction, thereby improving overall educational outcomes.

Leading universities worldwide have already outlined guidelines for integrating GenAI into teaching and research, emphasising a multifaceted approach to maximise benefits while mitigating risks. Harvard University, for instance, emphasises the importance of ensuring data privacy and security, implementing review protocols for AI-generated content, adapting academic integrity policies, increasing awareness of AI-enabled security threats and establishing clear reporting mechanisms. Such comprehensive policies aim to leverage GenAI’s transformative potential, enhancing educational and research outcomes while upholding ethical standards and promoting a culture of responsibility and innovation within the academic ecosystem (Dwivedi et al., 2023; Feuerriegel et al., 2024).

To effectively harness the full power of AI and GenAI within academia, it is important to investigate the level of knowledge and attitudes of faculty members and students towards these technologies. Understanding their perspectives on the benefits and risks associated with these technologies is essential. This foundational knowledge paves the way for developing strategies that leverage AI’s benefits while addressing its challenges, empowering the academic community to engage with this digital evolution confidently and competently.

Emerging research on the familiarity, perceptions and utilisation of AI, particularly GenAI, among students and university faculty, showcases a collective endeavour to understand the multifaceted implications of AI integration in educational settings. This body of work spans diverse academic landscapes, uncovering commonalities and divergences in the engagement with AI technologies across disciplines, regions and educational contexts.

At the heart of these investigations is an evident concern for the pedagogical integration of AI tools. Kohnke et al. (2023) and Zou et al. (2020) highlight the academic community’s cautious optimism towards AI’s potential to enhance learning experiences. English language instructors in Hong Kong expressed a need for professional development to bridge digital competencies and pedagogical knowledge gaps, mirroring concerns around the effective use of AI for language learning in China, where students noted the benefits of AI in developing speaking skills, albeit hampered by a lack of personalisation and feedback.

Parallel to these findings, Kurtz et al. (2024) uncovered a substantial reliance on GenAI applications among Israeli students for generating ideas and clarifying study content, with ChatGPT emerging as a prevalent tool. This usage underscores GenAI’s role as an academic assistant, though intertwined with ethical concerns regarding its potential to bypass traditional learning processes—a sentiment echoed in the examination of ChatGPT’s usage among students through TikTok analyses by Haensch et al. (2023), who also underscored the need for ethical guidance in AI’s academic application.

Further exploration into AI’s educational impact reveals regional variances in perception and utilisation. Ravi Kumar and Raman (2022) in India and Bonsu and Baffour-Koduah (2023) in Ghana illustrate a generally positive reception towards AI in higher education, with specific reservations about AI’s role as a faculty substitute and the importance of prior exposure to AI technologies in shaping student perceptions. These studies collectively underscore a nuanced understanding of AI’s potential to support administrative and educational processes while highlighting apprehensions regarding the depersonalisation of learning and the ethical implications of AI-driven content generation.

In a focused examination of AI’s academic reliability, Dahlkemper et al. (2023) investigated physics students’ perceptions in Germany, revealing a discrepancy between the perceived linguistic quality of ChatGPT responses and their scientific accuracy. This critical evaluation of content validity resonates across studies, emphasising the necessity for discernment in the application of AI tools within specific academic disciplines.

Collectively, these studies (Bonsu & Baffour-Koduah, 2023; Dahlkemper et al., 2023; Haensch et al., 2023; Kohnke et al., 2023; Kurtz et al., 2024; Ravi Kumar & Raman, 2022; Zou et al., 2020) underscore a global academic engagement with AI, driven by a dual pursuit of leveraging AI’s educational potential while navigating its ethical, pedagogical and epistemological challenges. The findings highlight a consensus on the need for comprehensive strategies that include tailored professional development, ethical guidelines for AI usage and a critical assessment of AI-generated content, ensuring that AI integration enhances rather than undermines the educational process.

Based on the existing literature, this study aims to investigate the knowledge, attitudes and utilisation of AI and GenAI among students and faculty members in Israeli institutions of higher education. Moreover, it seeks to explore how variables such as gender and academic discipline influence these factors. By examining the familiarity, perceptions and actual implementation of AI technologies in teaching and research across various fields and demographics, this research endeavours to provide an understanding of the current state of AI and GenAI adoption in the Israeli higher education landscape. The findings may shed light on potential disparities, challenges and opportunities associated with the integration of AI and GenAI in academic settings, ultimately informing strategies to foster its effective and equitable implementation.

Building upon existing research on the integration of AI and GenAI in academia, our study offers a novel contribution by delving into sociodemographic variances and the multifaceted use and understanding of GenAI across various academic domains. We contrast its applications within social sciences, engineering and humanities and analyse perspectives from both students and lecturers. This endeavour not only highlights the intricate ways in which AI and GenAI are being adopted in educational and research activities but also pinpoints the disparities, challenges and opportunities emerging from their use.

By focusing on the Israeli higher education landscape, our investigation extends the dialogue beyond regional confines to contribute to the global discourse on GenAI in academia. We explore how factors like gender and academic discipline shape the engagement with AI and GenAI, aiming to illuminate the broader implications of these technologies for educational practices worldwide. Our study enhances the literature by providing a comprehensive understanding of AI and GenAI’s adoption across diverse academic settings, thus offering insights into effective, equitable and tailored integration strategies.

The significance of our research lies in its capacity to inform policy-making and educational strategy development on a global scale. By identifying the nuanced challenges and opportunities associated with AI and GenAI’s implementation, our findings offer valuable guidance for academic stakeholders aiming to navigate the complexities of these technologies.

Methods
Research design and procedure

A mixed-methods (Padgett, 2012; Saks & Allsop, 2013) cross-sectional study was conducted utilising an online questionnaire as the primary research stage, which included 704 participants. This was followed by a second stage comprising semi-structured, in-depth interviews with 12 individuals, aimed at validating and deepening the findings obtained from the initial phase. The survey was intended for lecturers and students who registered to participate in the AI webinar held virtually at the University of Haifa in December 2023. To ensure the validity of the questionnaire, it was pilot-tested with four faculty members from different disciplines and four students before being distributed to the larger sample.

The survey had three parts: the first with seven closed questions and the second with four open questions, all covering the following topics: familiarity with AI and GenAI, actual use and attitudes regarding the integration of AI and GenAI in academia. The third part included five sociodemographic questions. Since the questions were not defined as mandatory, some respondents skipped some of them; therefore, analyses of individual questions sometimes yield sample sizes smaller than the total sample. In addition, semi-structured in-depth interviews with leading professionals in the field of AI and GenAI in academia were conducted using a protocol of five questions on attitudes towards the effects of AI on teaching and research in academia.

The study was approved by the Ethics Committee of the Faculty of Social Welfare and Health Sciences at the University of Haifa (confirmation number 024/24).

Sampling and data collection

An AI and GenAI webinar, intended for students and lecturers in Israel and requiring advance registration, was conducted at the University of Haifa in December 2023. Our survey was distributed via email 1 month in advance to all those who registered for the webinar. Before distributing the survey, a small pilot study was conducted to test the questions (Table 1).

Table 1. Sociodemographic characteristics of the participants

Variable Sub-variable Frequency (%)
Gender Male 238 (35)
Female 448 (65)
Total 686 (100)
Academic status Student 471 (69)
Lecturer 197 (29)
Other 16 (2)
Total 684 (100)
Faculty Humanities 89 (19)
Social sciences 167 (36)
Social welfare and health sciences 94 (21)
Engineering and exact sciences 25 (5)
Natural and environmental sciences 29 (6)
Law 21 (5)
Other or multidisciplinary 34 (7)
Total 459 (100)

Additionally, in-depth qualitative interviews were conducted with 12 leading figures in the field of AI in academia in Israel: lecturers (n = 10) and industry professionals working with academia (n = 2), including three women and nine men. The areas of specialisation of the interviewees include Chemical Engineering and Technology, Organisational Psychology, Educational Technology, Mental Health, Research and Development in Hi-tech, Political Science, Learning Technologies, Biostatistics, Philosophy, and Computer Science.

The professional sample was obtained through a convenience sample, where researchers reached out to professionals via email and invited them to participate in interviews for the research.

Research tool: survey structure

The variables examined in the study were the degree of familiarity with AI and GenAI tools, measured by the question ‘How well do you know AI and GenAI tools?’, and the view on whether AI and GenAI studies should be integrated into academic staff training, measured by the question ‘To what extent do you think AI and GenAI studies should be integrated into the training process of the academic staff?’ These two variables were condensed from a 5-point scale to a 3-point scale. Another variable was the actual usage of AI and GenAI tools, measured by the question ‘Have you ever used tools or applications powered by AI and GenAI for teaching/assessment/research purposes?’ with three possible answers: ‘Yes, I have used’, ‘I have not used but intend to use soon’ or ‘I have not used and do not intend to use’.

Additionally, the perceived benefits of AI and GenAI in teaching and research were measured using two questions, ‘Please indicate three main advantages of using AI and GenAI in teaching and assessment processes’ and ‘Please name three major advantages of using AI and GenAI in research’, and the risks of AI and GenAI in academia were measured by the question ‘What are the potential risks or concerns associated with the use of AI and GenAI in academia?’ For these questions, participants selected three benefits and three risks from a given list. Because each participant provided three responses per question, the response percentages sum to more than 100% of respondents. All the variables were examined in relation to sociodemographic variables, including gender, academic role (lecturer or student) and academic discipline.
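Because each respondent selected three options per question, the reported percentages are per respondent and therefore sum to roughly 300% across options. A minimal illustrative sketch of this tallying (the response lists below are hypothetical, not the study data):

```python
# Tallying multi-select survey answers: each respondent picks three
# options, so per-respondent option percentages sum to about 300%.
from collections import Counter

# Hypothetical responses: each inner list is one respondent's three picks.
responses = [
    ["data analysis", "literature writing", "prediction models"],
    ["data analysis", "collaboration", "literature writing"],
    ["prediction models", "data analysis", "literature writing"],
    ["collaboration", "prediction models", "data analysis"],
]

counts = Counter(pick for picks in responses for pick in picks)
n = len(responses)  # denominator is respondents, not responses
percentages = {option: 100 * c / n for option, c in counts.items()}

print(percentages)                # "data analysis" -> 100.0 here
print(sum(percentages.values()))  # 300.0: three picks per respondent
```

This is why, in Tables 4 and 5, the frequency sums exceed the column totals.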

Analysis

The quantitative part of the study comprised descriptive statistics and chi-square tests to examine the significance of differences between the various groups. The qualitative part comprised content analysis (Saldana, 2009). Themes related to the topics examined in the study, such as the benefits and potential risks of AI, were selected to check whether the survey responses converged with the views of the interviewed experts, thereby strengthening the validity of the study's results through triangulation (Andrew & Halcomb, 2009).
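The group comparisons rest on standard chi-square tests of independence over cross-tabulated counts. As a minimal sketch (in Python with SciPy, an implementation choice for illustration, not the software reported in the study), the gender-by-familiarity counts from Table 2 can be tested as follows; the statistic computed from these counts may differ slightly from the value printed in the table, depending on which cases were included:

```python
# Chi-square test of independence on a 2x3 contingency table.
# Rows: male/female; columns: minimal/moderate/high familiarity
# (counts taken from Table 2 of this study).
from scipy.stats import chi2_contingency

observed = [
    [95, 78, 64],    # male: minimally, moderately, highly familiar
    [241, 148, 58],  # female: minimally, moderately, highly familiar
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")
```

With these counts the test is highly significant (P < 0.001), consistent with the gender disparity reported in the Results.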

Results
Familiarity and practical use of AI and GenAI tools

Out of 701 participants who answered the question regarding their level of familiarity with AI and GenAI tools, 49% (n = 344) were minimally familiar with AI-based tools, 34% (n = 235) reported moderate familiarity and only 17% demonstrated a high level of familiarity. Out of 680 participants who answered the question regarding their actual use of AI and GenAI tools, 44% (n = 301) reported having used an AI and GenAI tool, while 56% reported having never used any, including around 16.3% who also do not intend to use them in the future. When asked about the factors preventing them from using AI and GenAI tools, 61.3% of those who answered (n = 138) cited a lack of knowledge as the main barrier.

Following the survey results, interviews were conducted with lecturers in the field of AI in academia and industry experts working with academia to gain further insights. Most interviewees identified the lack of knowledge as a key barrier preventing lecturers and students from using AI and GenAI. They mentioned that many scholars still have limited exposure to AI and GenAI tools and underestimate their potential.

‘Today, I see many scholars who are not exposed and do not understand the power of AI… knowledge is one of the barriers that can be problematic. This also affects the strategy of the university or college, because if the managers or the people who set the agenda do not have a deep knowledge of what is happening on the ground, it’s more of a perception gap’ (Chemical Engineering and Technology, Industry).

The interviewees pointed to the role academic institutions have played in perpetuating the lack of knowledge about AI. They noted that universities and colleges have failed to make AI tools accessible to lecturers and students, despite AI technologies existing for many years. According to the interviewees, the traditionally conservative organisational strategies and cultures within academic systems have prevented the integration of AI in research and learning activities on campuses until recently.

‘Part of the barrier is our very rigid organizational culture and tradition’ (Organizational Psychology, Academy).

While much of higher education may have lagged in AI adoption due to institutional inertia, the interviewees observed tentative signs of change. However, most agreed that mainstream integration of AI in academia is still in its early stages and will require ongoing efforts to disrupt traditionally slow-moving institutional cultures, integrate AI tools and determine rules for ethical use. They also emphasised the role of academic institutions in training and guiding lecturers and students in using AI tools. Some interviewees acknowledged that certain institutions have begun utilising and teaching AI in ethical and intelligent ways.

‘We have an AI integration procedure, we have field notes where we asked students to document the use process, we have internships. There is integration of artificial intelligence in every course in our faculty’ (Educational Technology, Academy).

Differences between women and men in their level of familiarity and use of AI and GenAI tools

An examination of the differences between women and men in their familiarity with AI and GenAI tools revealed a significant disparity (Table 2).

Table 2. Knowledge and actual use of AI and GenAI-based tools among female and male participants

Male (%) Female (%) χ2(2) P-value
Familiar with AI and GenAI-based tools Minimally 95 (40) 241 (54) 25.11 <0.0001
Moderately 78 (33) 148 (33)
Highly 64 (27) 58 (13)
Total 237 (100) 447 (100)
Actual use of AI and GenAI for teaching/research purposes Used 105 (46) 191 (44) 5.41 0.0669
Have not used but intend to use 81 (35) 186 (43)
Have not used and do not intend to use 45 (20) 60 (14)
Total 231 (100) 437 (100)

AI, artificial intelligence; GenAI, Generative AI.

Men reported a higher level of familiarity with AI and GenAI (27%) than women (13%) (χ2(2) = 25.11, P < 0.0001). However, no significant difference was found between women and men in the actual use of AI and GenAI tools. Among the men who reported high familiarity with AI-based tools (27%), 8% had never actually used such tools; among the women who reported high familiarity (13%), only about 3% had never used AI and GenAI-based tools.

Differences between lecturers and students in their level of familiarity and use of AI and GenAI tools

An examination of the differences between lecturers and students in their familiarity with AI and GenAI tools (Table 3) revealed that lecturers demonstrated a higher degree of familiarity (77%) than students (60%) (χ2(2) = 25.11, P < 0.0001). Furthermore, an assessment of actual usage also indicated a significant disparity, with lecturers using AI tools more (56%) than students (39%) (χ2(2) = 18.29, P < 0.0001).

Table 3. Knowledge and actual use of AI and GenAI-based tools among lecturers and students

Lecturers (%) Students (%) χ2(2) P-value
Familiar with AI and GenAI-based tools Minimally 6 (3) 33 (7) 25.11 <0.0001
Moderately 39 (20) 153 (33)
Highly 152 (77) 279 (60)
Total 197 (100) 465 (100)
Actual use of AI and GenAI for teaching/research purposes Used 109 (56) 178 (39) 18.29 <0.0001
Have not used but intend to use 66 (34) 191 (42)
Have not used and do not intend to use 20 (10) 85 (19)
Total 195 (100) 454 (100)

AI, artificial intelligence; GenAI, Generative AI.

While lecturers demonstrated greater baseline familiarity, the interviews uncovered sociocultural and institutional factors influencing integration. Interviewees also pointed to perceptual divides and diverse viewpoints across career stages and cohorts as a reason for unfamiliarity with AI tools:

‘First of all, because there is human diversity and there are also gaps between generations, there are people who are as if on the verge of retirement and they are now starting to cross the subject of artificial intelligence and flow with this thing and there are people at the beginning of their academic career who won’t use artificial intelligence tools’ (Mental Health, Academy).

The interviews drew attention to familiarity gaps and disagreement among lecturers regarding the use of AI tools. These disagreements may stem from ‘human diversity and gaps between generations’. Some interviewees noted resistance from senior lecturers unwilling to adopt new technologies, while younger lecturers are more open to learning:

‘There may be older researchers, professors who do not want to take part in this, who do not believe in it and this is not only in computer science, I’m talking about all fields of knowledge and fields of research in all faculties, not just life sciences or exact sciences or computer science, all of them’ (Research and Development in Hi-tech, Industry).

Among students, the interviewees referred to the impact of familiarity with AI and GenAI, which poses opportunities but also equity risks. On the one hand, the interviewees suggested that AI and GenAI may reduce gaps between students and help them overcome challenges such as language barriers and learning difficulties. For instance, Arab students in Israel, whose mother tongue is Arabic but who pursue their education in Hebrew, must also contend with English proficiency requirements.

‘So here is something that can make a huge difference for Arab students…to allow students at any moment to come to the university and overcome failures’. (Political Science, Academy)

However, others warned that better students ‘will get better through AI’ while low-achieving students ‘will fall back’ without sufficient guidance.

‘If we don’t become part of the event, challenges will develop for us, the strong students will become stronger through artificial intelligence, and the weak students will go back, and we need to pay attention to this’. (Learning Technologies, Academy).

Differences between academic disciplines in the level of familiarity with AI and GenAI tools, the actual use of these tools and attitudes regarding the need to integrate AI and GenAI studies into the training of the academic staff

When comparing academic disciplines, participants from engineering and exact sciences (n = 25) reported the highest familiarity (28%) with AI and GenAI-based tools, while humanities participants (n = 89) showed lower levels of familiarity than other fields (61%) (χ2(12) = 21.34, P = 0.0456). However, there was no significant difference between disciplines in self-reported actual use of AI and GenAI tools (χ2 = 13.59, P = 0.3280). When asked about the integration of AI and GenAI studies into academic faculty training, those who deemed it highly necessary were primarily from the social sciences (n = 165; 74%), followed by social welfare and health sciences (n = 94; 71%).

The interviews suggested that AI and GenAI could be applied more generally wherever they enhance human capabilities. However, human involvement remains essential for roles centred on inherent human strengths that technology may find difficult to replicate fully, such as medicine.

‘Certainly, in the subjects of the humanities, social sciences, etc. I don’t see how the AI gets any kind of priority over the human spirit’. (Mental Health, Academy).

Advantages of using AI and GenAI

When asked about the advantages of using AI and GenAI, participants marked the three most significant advantages from a given list. The three main advantages chosen by participants (n = 666) in the context of using AI in teaching were streamlining administrative tasks (44%), providing personalised instruction and assessment (43%) and systematically tracking learning process outcomes (40%). The three main advantages chosen by participants (n = 670) in the context of using AI and GenAI in research were support in data analysis and interpretation (54%), assistance in literature writing (49%) and prediction models based on existing data to guide future research (46%).

Similar to the quantitative results, the interviewees referred to the AI and GenAI integration prospects across three pivotal dimensions in the academy. AI and GenAI could judiciously support administrative and bureaucratic management by streamlining workflows and expenses and bolstering oversight through data-driven insights.

‘Artificial intelligence can assist in administrative and bureaucratic management, at the budgetary level and in control’ (Learning Technologies, Academy).

In the domain of research, the interviewees suggested that AI and GenAI demonstrate considerable promise to support core functions. It can aid analysis, enabling automated extraction and synthesis of patterns across large data sets. Brainstorming capabilities may likewise be augmented through AI and GenAI-driven idea generation and stimulation. Significant time savings could also be realised through tasks such as automated data processing, knowledge base curation and acceleration of routine workflows.

‘It affected the academy also at the research level because writing is a significant part of the research. Advances in research mean that today you can do a brainstorming with these tools. And of course, students also use it…’ (Biostatistics, Academy).

Pedagogically, the interviewees said that AI and GenAI shows promise in developing personalised education models. Adaptive learning tools, from customised curricula to immersive multimedia, may engender richer learning experiences. Assessment and feedback cycles could also be refined.

‘In teaching, the faculty members can do a great many things with it, such as develop a syllabus, develop lesson plans, develop questions for discussion, pictures, illustrations, and more and more’ (Learning Technologies, Academy).

In examining the differences between women and men in the perception of the benefits of AI in teaching (Table 4), it was found that men (35%) see an advantage of AI and GenAI as a decision support tool more than women do (23%) (χ2(1) = 9.99, P = 0.0016). In examining the differences between women and men in the perception of the advantages of AI and GenAI in research, it was found that men (31%) see an advantage of AI and GenAI in analysing written language using natural language models more than women do (24%).

Table 4. Differences in the perception of the benefits of AI and GenAI in teaching and research in academia between men and women and between lecturers and students

Teaching Male Female χ2(1) P-value
Decision support tool Frequency (%) 79 (35) 98 (23) 9.99 0.0016
Total 226 (100) 424 (100)
Research
Analysing written language using natural language models Frequency (%) 70 (31) 103 (24) 4.09 0.0432
Total 227 (100) 430 (100)
Lecturers Students χ2(1) P-value
Teaching
Improving student engagement and persistence Frequency (%) 58 (32) 108 (24) 3.92 0.0478
Systematic tracking of learning process outcomes Frequency (%) 38 (21) 216 (48) 39.37 <0.0001
Using AI and GenAI as a consultant for lecturers Frequency (%) 86 (48) 103 (23) 38.5 <0.0001
Enhancing accessibility and inclusion Frequency (%) 36 (20) 144 (32) 9.48 0.0021
Total 180 (100) 449 (100)
Research
Assistance in literature writing Frequency (%) 108 (58) 204 (45) 7.66 0.0056
Support in data analysis and interpretation Frequency (%) 84 (45) 258 (57) 8.35 0.0039
Prediction models based on existing data to guide future research Frequency (%) 48 (26) 245 (54) 40.22 <0.0001
Connecting researchers from similar fields and promoting collaborations Frequency (%) 33 (18) 149 (33) 14.48 0.0001
Total 186 (100) 453 (100)

AI, artificial intelligence; GenAI, Generative AI.

Remark: The frequency sum is greater than the total because the participants were required to choose up to three answers for the same question.

In examining the differences between lecturers and students in the perception of the benefits of AI and GenAI in teaching (Table 4), lecturers found improving student engagement and persistence (32% vs. 24%; χ2(1) = 3.92, P = 0.0478) and using AI and GenAI as a consultant for lecturers (48% vs. 23%; χ2(1) = 38.5, P < 0.0001) more advantageous than students did, whereas students found systematic tracking of learning process outcomes (48% vs. 21%; χ2(1) = 39.37, P < 0.0001) and enhancing accessibility and inclusion (32% vs. 20%; χ2(1) = 9.48, P = 0.0021) more advantageous than lecturers did.

An examination of the differences between lecturers and students in their perception of the benefits of AI and GenAI in research revealed that lecturers (58%) perceived assistance in literature writing as a more significant advantage than students did (45%) (χ2(1) = 7.66, P = 0.0056). However, students viewed support in data analysis and interpretation (57%), prediction models based on existing data to guide future research (54%) and connecting researchers from similar fields to promote collaborations (33%) as more advantageous than lecturers did (45%, 26%, 18%) (χ2(1) = 8.35, P = 0.0039; χ2(1) = 40.22, P < 0.0001; χ2(1) = 14.48, P = 0.0001).

Potential risks of using AI and GenAI

The most significant potential risks of AI and GenAI use in academia (Table 5) selected by the participants (n = 684) include the potential over-reliance on technology (67%), bias in algorithms or data (55.6%), transparency issues in AI and GenAI processes (50.3%) and concerns regarding privacy and data security (49.9%).

Table 5. Differences in the perception of the risks of AI and GenAI in academia between lecturers and students

Lecturers Students χ2(1) P-value
Bias in algorithms or data Frequency (%) 119 (62) 244 (53) 4.50 0.0339
Compromising work efficiency Frequency (%) 12 (6) 55 (12) 4.75 0.0293
Transparency issues in AI and GenAI processes Frequency (%) 123 (64) 207 (45) 19.02 <0.0001
A threat to existing academic positions Frequency (%) 19 (10) 92 (20) 8.68 0.0032
Total 192* (100) 461* (100)

AI, artificial intelligence; GenAI, Generative AI.

*The frequency sum is greater than the total because the participants were required to choose up to three answers for the same question.

In examining the differences between lecturers and students in the perception of the risks of AI and GenAI in academia, it was found that lecturers were more concerned than students about bias in algorithms or data (62% vs. 53%) and transparency issues in AI and GenAI processes (64% vs. 45%), whereas students were more concerned than lecturers about compromising work efficiency (12% vs. 6%) and a threat to existing academic positions (20% vs. 10%).
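The chi-square statistics reported here can be reproduced from the group-by-selection frequencies. A minimal pure-Python sketch (for illustration only, assuming a Pearson statistic on a 2×2 table without continuity correction, which matches the reported values):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the 2x2
    table [[a, b], [c, d]], where rows are the groups (lecturers, students)
    and columns are 'selected the risk' / 'did not select it'."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# 'Bias in algorithms or data': 119 of 192 lecturers vs. 244 of 461 students
chi2 = chi2_2x2(119, 192 - 119, 244, 461 - 244)
print(round(chi2, 2))  # ≈ 4.50, the value reported for this risk
```

The same computation applied to 'compromising work efficiency' (12 of 192 vs. 55 of 461) yields ≈ 4.75, also matching the reported statistic.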

Almost the same potential risks emerged from the interviews. However, plagiarism through verbatim copying was also raised by most of the interviewees. Superficial or shallow work was another concern where tools substitute for deeper human reasoning, and over-reliance on AI and GenAI assistance risks eroding independent and critical thought over the long term. Questions were also raised about information reliability where AI and GenAI are involved in knowledge generation or curation: without proper oversight and context, some warned, data sets or conclusions could reflect unintended biases. Most crucially, there were palpable worries about an overdependence that displaces inherently human functions.

The interviewees thus pointed to clear disadvantages and risks of using AI: copying, superficial work, damage to thinking, over-reliance on AI and doubts about the reliability of information.

‘The challenges of artificial intelligence include: mistakes, biases (in texts it is very difficult to notice), translation problems (the issue of Hebrew creates a lot of difficulties), the concept of a black box (sources)—how is the intelligence built and work? What is it based on?’ (Learning Technologies, Academy).

The shift in the lecturer role

The interviews also shed light on the shift in the roles of lecturers and students following the development of AI. Today, AI and GenAI can provide knowledge, so the lecturer’s role shifts from knowledge provider to a facilitator of learning processes who awakens curiosity. All interviewees noted that in the past lecturers focused on transmitting knowledge to students, but nowadays, with information available online and especially with AI’s development, their role must change. AI and GenAI can easily provide information, so there is no need for lecturers to memorise and transmit knowledge. Instead, lecturers are tasked with guiding and overseeing learning and thinking processes. They must develop students’ curiosity and inquisitiveness, teaching not just technical tools but also how to use them critically and creatively.

‘If we once thought we impart knowledge or tools to tackle complex tasks, it seems AI is capable of both. We’ll need to find ways to use AI as a teaching and research tool. But we’ll need to change what we teach—we won’t need to provide the same types of knowledge, and definitely won’t need to teach the same technical tools we once did’ (Organizational Psychology, Academy).

Classroom dynamics also may change, with lecturers no longer ‘knowledge agents’ and students not ‘passive receivers’. Lecturers must closely work with students and provide guidance beyond just sharing knowledge. They should provide creative freedom and experiential learning through projects.

‘Truly giving them full freedom to create because the model of me as the knowledgeable one and the passive student is over. We can now provide thinking approaches, foundations, and disciplines, and encourage tool use for self-development and a changing job market. Creativity is therefore key, not memorized information’ (Mental Health, Academy).

Accordingly, teaching methods will be redesigned to integrate AI and GenAI, potentially transforming classrooms into added-value learning spaces. In addition, the evaluation methods will also change and be adapted to the development of AI and GenAI.

‘Classrooms will be completely different. Lecturers won’t write on boards, it will be more virtual, advanced, and digital. Classrooms must provide added value beyond online access. With today’s AI, much is accessible. So what added value can universities provide?’ (Chemical Engineering and Technology, Industry).

Integrating AI and GenAI is highly important for academia to remain relevant and adapt to changing labour market needs. The interviewees noted that knowledge of using AI is essential to ensure graduates’ futures. As one quote noted, AI will not replace workers, but those who know how to use it will gain an advantage.

‘The sentence “AI will not replace anyone” has become a mantra, and those using AI will replace workers. Today, now, yesterday is the time to jump on the wagon and understand it, use it, and follow its development in order to not fall behind. Any university not allowing AI introduction through its gates today is doing wrong, not just for lecturers but for students who will graduate with knowledge and proficiency in AI from other institutions’ (Political Science, Academy).

Therefore, it is important to follow AI’s development and integrate it in both academia and all future fields while establishing ethical rules. AI is the future and those who are unaware of how to use it ‘will fall behind’.

Discussion

In recent years, the advent of AI and GenAI has marked a significant milestone in the evolution of technology, transcending its traditional industrial confines to make a pronounced entry into the public sphere, including the realm of academia. This transition, while reflective of a broader technological advancement trend, has been particularly noteworthy over the last year, showcasing AI’s growing accessibility and applicability within educational and research settings.

Our empirical research within the Israeli academic sector unveils a pronounced knowledge gap concerning AI and GenAI, reflecting a broader challenge within academia. Specifically, out of 71 participants surveyed, 49.1% reported minimal familiarity with AI and GenAI tools, while 33.5% indicated moderate familiarity. This reveals that a substantial majority lacks a comprehensive understanding and engagement with AI technologies. Additionally, 55.7% have never used AI and GenAI tools, with 61.3% of non-users identifying a lack of knowledge as the primary hindrance to their engagement with these technologies.

Knowledge gaps in AI and GenAI in academia

The knowledge gap in AI and GenAI within the academic sector, a phenomenon echoed globally, signifies a complex challenge transcending geographical boundaries. Studies by Kohnke et al. (2023) and Zou et al. (2020) have shed light on similar disparities in digital competencies and pedagogical knowledge among educators worldwide, reflecting a cautious optimism towards the potential of AI to enhance learning experiences. Yet, these studies also reveal significant obstacles, including a pervasive lack of knowledge and preparedness for effective AI and GenAI integration. This global sentiment is mirrored in our findings, where experts across various academic disciplines confirm that insufficient exposure to AI’s capabilities and applications severely hampers its adoption and utilisation. An expert from the field of Chemical Engineering and Technology poignantly highlighted:

‘Today, I see many scholars who are not exposed and do not understand the power of AI… knowledge is one of the barriers that can be problematic. This also affects the strategy of the university or college. Because if the managers or the people who set the agenda do not have a deep knowledge of what is happening on the ground, it’s more of a perception gap’.

This knowledge gap underscores a broader issue of unpreparedness within academic institutions for the seismic shifts brought about by the AI and GenAI revolution. For years, academia has been anchored in traditional frameworks of knowledge acquisition and dissemination, frameworks now challenged by the rapid advancements in AI and GenAI technology. These advancements, capable of redefining educational paradigms and research methodologies, have caught many institutions off guard, leaving them in a scramble to catch up. This situation is not merely indicative of a technological lag but highlights a profound challenge to the core methodologies of knowledge construction and dissemination within the academic realm.

Moreover, the hesitation or delay in embracing AI and GenAI reflects a deeper uncertainty about integrating these technologies in a way that respects and augments established educational values and practices. The disruptive potential of AI and GenAI raises pressing questions about the future of teaching, learning and research, necessitating a thorough re-evaluation of pedagogical approaches and the cultivation of new competencies that resonate with an increasingly digital landscape.

Addressing the identified knowledge gap is imperative, requiring a concerted effort from academic institutions to develop strategies that encompass education, training and policy development, all while emphasising the ethical and responsible use of AI and GenAI. Bridging this gap is not only about catching up with technological advancements but also about reimagining the processes of knowledge creation and dissemination in an era defined by digital innovation (Pedró et al., 2019). This approach calls for a systemic shift in academia, one that prepares institutions not just to adopt AI and GenAI technologies but to integrate them in ways that enrich and extend the foundational principles of academic enquiry.

Gender differences in familiarity and use of AI and GenAI tools

Further complicating the landscape of AI and GenAI adoption in academia are the significant gender disparities revealed by our study, which mark a critical area of concern. Men report a higher level of familiarity with AI tools (27%) than women (13%), a disparity that intriguingly does not translate into actual usage rates. This discrepancy hints at a potential overconfidence among men regarding their AI and GenAI competencies, a trend that resonates with broader societal and psychological observations. Research indicates that men are generally more prone to overestimating their abilities, whereas women tend to underestimate theirs, even in scenarios where actual skills do not significantly vary (Correll, 2001; Ehrlinger & Dunning, 2003; Huang, 2013; Pallier, 2003; Sáinz et al., 2020; Wang & Yu, 2023). This gendered divergence in self-assessment can significantly shape academic and professional trajectories, influencing the kinds of opportunities pursued and the level of support sought or provided. The presence of such disparities underscores an urgent need for academic institutions not only to recognise this issue but also to actively implement targeted interventions. The goal is to cultivate an academic environment that champions accurate self-assessment and fosters confidence building across the gender spectrum. By addressing these ingrained biases, academia can move towards a more inclusive and equitable landscape, ensuring that all individuals, irrespective of gender, have equal access to AI and GenAI tools and technologies. This approach is critical for mitigating missed opportunities for learning and development, thereby enhancing overall AI adoption and integration within educational settings.
Aligning with existing literature on gender differences in self-assessment and confidence levels, our findings emphasise the necessity of comprehensive strategies aimed at rectifying these disparities. Through such efforts, academia can achieve a more balanced and fair engagement with AI and GenAI technologies, promoting a culture of continuous skill enhancement and accurate self-awareness across its diverse community (Kurtz et al., 2024; Malik et al., 2024; Mathew & Stefaniak, 2024).

Differences between lecturers and students in AI and GenAI familiarity and usage

The disparity in the familiarity and actual usage of AI and GenAI tools between lecturers and students highlights several critical dimensions of the academic integration of AI and GenAI. Lecturers exhibit a higher degree of familiarity (77%) and usage (56%) than students (60% familiar, 39% using), pointing to an inherent discrepancy within the academic ecosystem. We assume that this gap is not merely a function of exposure but is deeply influenced by sociocultural and institutional factors, as well as perceptual divides across different academic and career stages.

The interviews in our study underscore the significance of ‘human diversity and gaps between generations’ in understanding this disparity. Some senior lecturers exhibit resistance to adopting AI technologies, potentially due to a reluctance to pivot from established teaching and research methodologies. Conversely, younger lecturers and those at the beginning of their academic careers are more inclined to embrace AI, suggesting a generational divide in the willingness to integrate new technologies into academic practice.

For students, the hesitation to engage more deeply with AI tools could stem from a lack of clear guidance from their institutions. This uncertainty about how their use of AI is perceived or valued within the academic framework may deter them from leveraging these tools to their full potential. Specifically in Israel, external factors such as the postponement of the academic year due to the War in Gaza may further exacerbate this hesitancy, limiting students’ opportunities to explore and utilise AI and GenAI in their studies compared to lecturers who continue their research activities.

The differential impact of AI and GenAI on students highlights a double-edged sword: while AI has the potential to level the playing field, offering support to overcome language barriers and learning difficulties, as noted for Arab students, it also risks widening the gap between high-achieving and lower-achieving students without adequate support and guidance.

An additional interpretation of this disparity might be related to the overall integration strategy of AI and GenAI within the academic curriculum. Lecturers, with their closer involvement in the development and execution of curriculum and research, may have more immediate opportunities and institutional support to incorporate AI and GenAI into their work. In contrast, students’ engagement with AI and GenAI may depend heavily on how these tools are introduced and integrated into their learning experiences by the faculty and administration (Kurtz et al., 2024). The lack of a cohesive strategy for AI and GenAI education and usage guidelines at the institutional level can leave students unsure about how to effectively utilise AI and GenAI resources, fearing potential academic repercussions or misalignment with educational objectives. Moreover, the broader context of societal and technological readiness plays a crucial role. In a rapidly evolving digital landscape, the pace at which institutions adapt to incorporate AI and GenAI into educational settings may lag behind technological advancements, leaving both lecturers and students navigating a complex and often unclear path towards effective integration.

In light of this, bridging the familiarity and usage gap between lecturers and students in AI and GenAI requires a multifaceted approach that addresses generational, cultural and institutional barriers. It necessitates clear guidelines, support systems and an inclusive strategy that recognises the diverse needs and challenges within the academic community, ensuring that all members can harness the benefits of AI and GenAI without exacerbating existing disparities.

Differences across academic disciplines in AI and GenAI familiarity and usage

The examination of the familiarity and usage of AI and GenAI tools across different academic disciplines unveils notable disparities that reflect the intrinsic characteristics and technological orientations of these fields. Engineering and exact science participants reported the highest familiarity with AI- and GenAI-based tools, which is not surprising given the technical and computational nature of these disciplines. This contrasts with the humanities, where participants showed lower levels of familiarity, underscoring the traditional divide between Science, Technology, Engineering and Mathematics (STEM) fields and the humanities in terms of technological engagement and digital literacy. Despite these differences in familiarity, there was no significant disparity in the actual usage of AI and GenAI tools across disciplines, suggesting that while knowledge about AI and GenAI might be uneven, its application has permeated a broader range of academic fields. This universality of AI and GenAI tool usage, despite varying levels of familiarity, highlights the growing importance of AI across all areas of academic enquiry, not just those traditionally associated with technology.

The call for integrating AI and GenAI studies into academic staff training was strongest among participants from the social sciences and professions in the welfare and health sciences. This demand likely stems from a recognition of AI and GenAI’s potential to enhance research methodologies, data analysis and even pedagogical approaches within these fields. The social sciences and health professions, dealing extensively with complex human behaviours, social structures and health outcomes, may see AI and GenAI as a powerful tool for uncovering insights that are not readily accessible through traditional methods.

However, the interviews reveal a nuanced view of AI and GenAI’s role, particularly in disciplines focused on the human experience, such as the humanities and mental health. Here, the sentiment is that AI and GenAI, despite their capabilities, should not overshadow the human element integral to these fields. The statement from a mental health academic, ‘I don’t see how the AI gets any kind of priority over the human spirit’, reflects a broader debate about the balance between technological advancement and the preservation of inherently human qualities and insights that define these disciplines.

The disparity in familiarity with AI and GenAI and the attitudes towards its integration into academic training may reflect underlying assumptions about the role of technology in different fields. In engineering and exact sciences, the affinity for AI and GenAI could be attributed to the direct applicability of these tools in research and problem-solving within these disciplines. Conversely, the emphasis on AI training in the social sciences and health professions, despite lower usage, indicates a recognition of the transformative potential of AI and GenAI in enhancing qualitative research, improving healthcare outcomes and informing social policy, albeit with a cautious approach to its application.

Perceived benefits of AI and GenAI in teaching and research

Additionally, in our study we specifically explored the perceived advantages of AI and GenAI in both teaching and research domains. Participants (n = 666 for the teaching context; n = 670 for the research context) were asked to identify the three most significant advantages of using AI and GenAI from a predetermined list. In the teaching context, the top advantages identified were streamlining administrative tasks (44.1%), providing personalised instruction and assessment (42.8%) and systematically tracking learning process outcomes (40%). For research, the leading advantages noted were support in data analysis and interpretation (54%), assistance in literature writing (49%) and the development of prediction models based on existing data to guide future research (46%).

The preference for these particular advantages reflects a broader recognition of AI and GenAI’s role in augmenting the academic workflow by enhancing efficiency, personalisation and analytical capacity. Streamlining administrative tasks, for example, addresses a critical need within academia to reduce the bureaucratic burden on educators and researchers, thereby allowing more focus on core educational and scholarly activities. Similarly, the emphasis on personalised instruction and the systematic tracking of learning outcomes speaks to an increasing demand for adaptive learning environments that cater to individual student needs and learning paths.

In research, AI and GenAI’s capacity to support data analysis and interpretation, literature writing and predictive modelling underscores the growing importance of data-driven approaches in academic enquiry. These tools not only expedite the research process but also enhance the depth and breadth of analytical capabilities, facilitating more nuanced and comprehensive investigations.

Interviews conducted as part of our study further highlighted the multidimensional impact of AI and GenAI integration across administrative, research and pedagogical domains. Administratively, AI and GenAI are seen as tools for optimising budgetary management and enhancing operational efficiency. In research, AI and GenAI’s potential to augment brainstorming, data analysis and literature review processes was emphasised, aligning with the quantitative findings regarding AI’s advantages in supporting core research functions. Pedagogically, the application of AI in developing personalised education models, such as adaptive learning tools and immersive multimedia resources, was noted for its potential to enrich learning experiences.

Parallel findings from the broader academic literature underscore these advantages. Studies by Kurtz et al. (2024), Haensch et al. (2023), Ravi Kumar and Raman (2022), and Bonsu and Baffour-Koduah (2023) have documented similar recognitions of AI and GenAI’s potential to support academic tasks, generate ideas and clarify study content, with specific applications like ChatGPT being highlighted for their role as academic assistants. These studies collectively affirm the benefits of AI and GenAI in enhancing administrative efficiency, research capacity and pedagogical innovation, albeit with an awareness of the ethical considerations and challenges associated with AI’s integration into academic settings.

The disparity in perceptions regarding the benefits of AI and GenAI, specifically in the context of analysing written language using natural language processing (NLP), between men and women uncovers a deeper dialogue about gender and professional background in the engagement with AI and GenAI technologies. Our findings indicate that a higher proportion of men (31%) recognise the advantage of AI and GenAI in NLP tasks than women (24%). This difference can be attributed to the varying degrees of involvement and exposure to AI and GenAI and machine learning within professional and academic realms, especially in fields traditionally dominated by men.

NLP is a subset of AI that focuses on the interaction between computers and humans through natural language. The goal of NLP is to enable computers to understand, interpret and produce human languages in a valuable way. The higher valuation of AI’s capabilities in NLP by men could reflect their greater presence in STEM fields, where such technologies are more prevalent and emphasised.
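As a toy illustration of the most basic layer of NLP (a hypothetical snippet for exposition, not drawn from the study's data), tokenising free text and counting term frequencies is the kind of surface analysis on which richer language models build:

```python
import re
from collections import Counter

def top_terms(text, k=3):
    """Tokenise text into lowercase word tokens and return the k most
    frequent terms with their counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(k)

print(top_terms("AI can read text, and AI can also write text."))
# [('ai', 2), ('can', 2), ('text', 2)]
```

Modern NLP systems go far beyond such counting, but the gap between this mechanical view of language and the interpretive work of the humanities helps explain the divergent perceptions discussed above.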

STEM fields have historically seen a gender imbalance, with men significantly outnumbering women, particularly in engineering- and technology-related disciplines. This imbalance likely influences the level of exposure and familiarity individuals have with AI technologies, including NLP. Men, being more prevalent in these fields, may have more opportunities to interact with and appreciate the complexities and utilities of AI tools, leading to a higher recognition of their advantages in specific tasks like language analysis. By contrast, the underrepresentation of women in STEM may limit their exposure to the cutting-edge applications of AI, including NLP, affecting their perceptions of these technologies’ potential benefits. This gap underscores the necessity of initiatives aimed at encouraging women’s participation in STEM fields, not only to bridge the gender disparity but also to enrich the diversity of perspectives and innovations within AI development and application.

Addressing this perceptual gap requires a multifaceted approach, including education and advocacy to demystify AI and GenAI technologies for those outside the traditional STEM fields and to highlight AI’s interdisciplinary applications. Promoting equitable access to AI and GenAI education and resources, alongside fostering a culture that values diversity in AI and GenAI development, can contribute to a more inclusive understanding and appreciation of AI and GenAI’s potential across all genders and professional backgrounds.

The observed differences in perceptions of the benefits of AI and GenAI also emerged between lecturers and students. Lecturers view AI and GenAI as a significant asset in enhancing student engagement and persistence (32%) and as a consultant for their teaching strategies (48%). This perspective likely stems from lecturers’ responsibility to design and deliver educational content that is engaging and effective. AI’s and GenAI’s ability to act as a consultant reflects a desire for tools that can offer personalised insights and suggestions to improve teaching methodologies, thereby directly influencing the quality of education. The emphasis on improving student engagement and persistence indicates an acute awareness of the challenges in maintaining students’ interest and motivation, areas where AI and GenAI can provide innovative solutions through interactive and adaptive learning experiences.

Conversely, students prioritise the systematic tracking of learning outcomes (48%) and enhancing accessibility and inclusion (32%) as the principal benefits of AI and GenAI in education. These preferences highlight students’ focus on tangible, individualised learning achievements and the importance of creating an equitable educational environment. Students’ emphasis on tracking outcomes suggests a keen interest in monitoring their progress and receiving personalised feedback, which AI and GenAI can facilitate through data analytics and adaptive learning systems. Similarly, the value placed on accessibility and inclusion reflects a broader understanding of AI’s potential to democratise education, making learning more adaptable to diverse needs and backgrounds.

In research, the divergence continues, with lecturers perceiving assistance in literature writing (58%) as more beneficial, underscoring the significance of AI in streamlining the extensive and often tedious process of academic writing and literature review. This contrasts with students, who value AI’s role in data analysis and interpretation (57%), the creation of prediction models (54%) and facilitating research collaborations (33%). These differences can be attributed to students’ developmental stage in their academic careers, where the emphasis is on learning and applying research methodologies, understanding data and building networks for future research opportunities.

These divergences in perception illuminate the multifaceted applications of AI and GenAI across the academic landscape, reflecting varied priorities that align with the specific roles and objectives of lecturers and students. While lecturers focus on AI and GenAI as a tool for enhancing teaching effectiveness and efficiency, students see AI and GenAI as instrumental in personalising and democratising their learning and research experiences. Addressing these diverse needs requires a nuanced approach to integrating AI in academia, ensuring that the deployment of AI and GenAI technologies resonates with and benefits the entire academic community.

Potential risks of using AI and GenAI in academia

The utilisation of AI and GenAI in academia brings to light an array of potential risks that span across various aspects of teaching and research. Participants in our study highlighted concerns including an overreliance on technology, biases in algorithms and data, transparency issues in AI and GenAI processes and privacy and data security worries. Distinct differences in perception were noted between lecturers and students, with lecturers more concerned about algorithmic bias and transparency, while students expressed apprehension about AI and GenAI’s impact on work efficiency and academic positions.

To navigate these challenges, the approach adopted by Harvard University provides a valuable blueprint. Harvard emphasises the importance of data privacy and security, the implementation of review protocols for AI-generated content, the adaptation of academic integrity policies and the enhancement of awareness around AI-enabled security threats. These measures, aimed at responsibly leveraging GenAI’s potential, seek to balance the benefits of AI with the imperative to mitigate associated risks, thereby fostering an environment of ethical standards and innovation within academia (Dwivedi et al., 2023; Feuerriegel et al., 2024).

Our study findings echo the critical need for such comprehensive frameworks. Concerns over plagiarism, the potential for superficial work and the erosion of critical thinking skills underscore the necessity for maintaining human oversight in AI and GenAI’s academic applications. This oversight ensures that AI tools are used to complement human capabilities, avoiding overdependence and ensuring the reliability of AI-generated or curated information. The risks highlighted by lecturers and students alike point to the essential role of clear guidelines and reporting mechanisms in guiding the ethical integration of AI and GenAI tools in educational settings.

Drawing upon the guidelines established by Harvard and the insights from academic research, it becomes evident that mitigating the risks associated with AI and GenAI in academia requires a multifaceted strategy. This strategy should not only address the practical challenges of AI and GenAI integration but also uphold the principles of ethical use, transparency and data security. By adopting such an approach, educational institutions can leverage AI and GenAI’s benefits for teaching and research while navigating the ethical and practical complexities inherent in these technologies.

One of the inherent limitations of our study is that the sample may not be fully representative of the entire population of lecturers and students. This limitation acknowledges that while our sample is substantial, offering a rich and insightful exploration into the perceptions and attitudes towards AI and GenAI in academia, it does not encompass every viewpoint within the diverse academic community. However, this does not detract from the value and depth of the findings. The inclusion of a qualitative validation through interviews with experts in the field significantly enriches our understanding, providing nuanced insights that enhance the overall contributions of this study. This methodological approach, blending quantitative and qualitative data, allows for a more comprehensive exploration of the complex dynamics surrounding AI adoption in educational settings, highlighting varied perspectives and experiences while acknowledging the scope limitations of this study.

The shift in the lecturer’s role due to AI and GenAI development

The integration of AI and GenAI into the academic landscape is a multifaceted endeavour that bridges technological innovation with pedagogical practice. Drawing insights from our study on AI and GenAI’s perceptions and applications in academia, the critical need for aligning AI’s capabilities with educational values becomes apparent. In navigating this complex terrain, the diffusion of innovations theory by Rogers (2003) offers a compelling framework for understanding how and why AI can be more effectively adopted within academic settings.

Rogers’ theory, detailed in his seminal work, ‘Diffusion of Innovations’ (Rogers, 2003), elucidates the process through which new ideas and technologies are adopted by a community. At its core, the theory posits that the adoption of an innovation is influenced by factors such as its perceived relative advantage, compatibility with existing values and practices, simplicity of use, trialability and observable results. Applying Rogers’ framework to AI and GenAI in academia, successful integration requires demonstrating AI’s potential benefits, aligning it with the educational ecosystem’s intrinsic needs, ensuring ease of use, enabling experimentation and highlighting tangible outcomes.

Conclusions

Future research endeavours should delve into the development and empirical evaluation of alternative assessment methods that leverage AI and GenAI’s strengths. By embracing AI and GenAI’s capacity for personalisation and efficiency, academia can uncover novel assessment paradigms that resonate more deeply with diverse learning styles and objectives. For example, adaptive testing platforms that utilise AI to adjust the difficulty of questions in real time based on student responses can be developed. Additionally, AI-driven formative assessments that provide immediate, personalised feedback to students can help identify learning gaps and suggest targeted interventions.
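The real-time difficulty adjustment described above can be illustrated with a minimal staircase rule: step up after a correct answer, step down after an incorrect one. The sketch below is a simplified illustration, not a production adaptive-testing engine; the item bank, difficulty range and question identifiers are all hypothetical assumptions.

```python
import random

# Hypothetical item bank: difficulty levels 1 (easiest) to 5 (hardest),
# each holding a pool of question identifiers.
ITEM_BANK = {level: [f"q{level}-{i}" for i in range(1, 11)] for level in range(1, 6)}

def next_question(level: int) -> str:
    """Draw a question from the pool at the current difficulty level."""
    return random.choice(ITEM_BANK[level])

def update_level(level: int, answered_correctly: bool) -> int:
    """Staircase rule: step up after a correct answer, down after an
    incorrect one, clamped to the available difficulty range (1-5)."""
    step = 1 if answered_correctly else -1
    return min(5, max(1, level + step))

# Simulated session: a student who reliably answers items up to level 3,
# so the platform oscillates around that student's ability level.
level = 3  # start at medium difficulty
for _ in range(6):
    question = next_question(level)
    answered_correctly = level <= 3  # stand-in for a real student response
    level = update_level(level, answered_correctly)
```

Real platforms typically replace the fixed step with an item-response-theory model, but the staircase captures the core feedback loop: every response immediately reshapes what the student sees next.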

This exploration should extend to the ethical dimensions of AI and GenAI use, particularly the mitigation of biases and the safeguarding of data privacy and security. Establishing robust ethical guidelines and practices is paramount to fostering trust and integrity in AI and GenAI applications within educational settings. Institutions should create interdisciplinary committees that include ethicists, educators, technologists and students to draft comprehensive ethical guidelines. These guidelines should cover aspects such as data privacy, algorithmic transparency and bias mitigation. Regular workshops and training sessions should be conducted to ensure that all stakeholders are aware of and adhere to these guidelines. Additionally, institutions should implement AI ethics review boards to oversee the deployment of AI technologies in academic settings.

The relevance of AI and GenAI to distance learning and e-learning is profound. These technologies can significantly enhance accessibility by providing personalised learning experiences tailored to individual needs, thus making education more inclusive. For instance, AI-driven adaptive learning systems can adjust the difficulty level of educational content based on the learner’s progress, ensuring that each student receives the appropriate level of challenge and support. This personalisation can lead to improved learning outcomes and greater student engagement. Furthermore, AI and GenAI can streamline administrative tasks, such as grading and scheduling, thereby allowing educators to focus more on instruction and student interaction. These technologies also facilitate interactive and adaptive learning experiences, which are crucial in distance learning environments where traditional face-to-face interaction is limited. By incorporating AI and GenAI, educational institutions can create more dynamic and responsive learning environments that cater to diverse student populations.

To address the knowledge gap and gender disparities highlighted in this study, future research should investigate effective strategies for AI and GenAI education and training across disciplines and demographics. This could involve the development and evaluation of targeted initiatives to enhance AI and GenAI literacy, promote equitable access to AI and GenAI resources and cultivate a culture of diversity and inclusion in AI and GenAI development and application. Specific initiatives aimed at encouraging female students and faculty members to engage with AI technologies, such as mentorship programmes and scholarships, should be implemented. Moreover, integrating AI literacy into the core curriculum across all disciplines can ensure that all students, regardless of their field of study, gain a basic understanding of AI technologies.

Furthermore, exploring the potential of interdisciplinary collaborations and knowledge-sharing could help bridge the divide between STEM and non-STEM fields, fostering a more comprehensive understanding and appreciation of AI and GenAI’s potential across academia. Establishing partnerships with international institutions can foster global academic collaboration and knowledge exchange. For instance, creating joint research projects and student exchange programmes with universities in different countries can provide diverse perspectives on AI integration. Additionally, interdisciplinary research centres dedicated to AI and its applications in various fields can be established to promote collaboration between different academic departments.

An interdisciplinary and cross-cultural approach to studying AI and GenAI’s role in academia can illuminate the diverse ways AI and GenAI can be adapted and embraced across different contexts, underscoring the importance of compatibility and trialability in Rogers’ theory. Such research can reveal the potential for AI and GenAI to foster global academic collaboration, enhance accessibility and tailor learning experiences to diverse student needs, thereby demonstrating the observable results of AI and GenAI integration.

Moreover, emphasising AI and GenAI as a tool for augmenting critical thinking and creativity aligns with Rogers’ notion of observable results, showcasing AI and GenAI’s capacity to enhance educational outcomes beyond administrative efficiency. By framing AI as a complement to human intellectual efforts, this approach highlights the technology’s role in facilitating innovative teaching and research practices, encouraging a reevaluation of pedagogical strategies to integrate AI and GenAI meaningfully. Educational strategies should be developed to use AI to enhance critical thinking and creativity among students. For example, AI-powered platforms that facilitate collaborative problem-solving and creative projects can be integrated into the curriculum. Encouraging students to use AI tools to explore new ideas, create art and develop innovative solutions to real-world problems can foster a culture of creativity and innovation.

Recommendations

Future studies should also explore the long-term impacts of AI and GenAI integration on academic performance, student engagement and research productivity across various disciplines and cultural contexts. Longitudinal research designs, tracking cohorts of students and faculty over several years, could help identify the factors that contribute to successful AI and GenAI adoption and the barriers that hinder its effective implementation. Findings from these studies can guide institutional policies and investment decisions, ensuring that resources are allocated effectively to support AI integration. For example, a 5-year longitudinal study could follow undergraduate students from their first year through graduation and into early career stages, examining how exposure to AI tools affects their academic performance, skill development and career readiness. The study could compare outcomes between students in AI-integrated programmes and those in traditional curricula across multiple disciplines.

In addition to the above, future studies should consider developing alternative assessment methods, such as adaptive testing platforms that utilise AI to adjust the difficulty of questions in real time based on students’ responses. Additionally, AI-driven formative assessments can provide immediate, personalised feedback to students, helping identify learning gaps and suggesting targeted interventions.

Ethical guidelines and practices are essential for the responsible integration of AI. Interdisciplinary committees that include ethicists, educators, technologists and students should draft comprehensive ethical guidelines covering data privacy, algorithmic transparency and bias mitigation. Furthermore, AI ethics review boards should be established to oversee the deployment of AI technologies in academic settings. A model for such an initiative could be a university-wide AI Ethics Committee, comprising representatives from philosophy, computer science, law, education and student government. This committee could develop a code of ethics for AI use in academia, addressing issues such as data privacy in learning analytics, transparency in AI-assisted grading and bias mitigation in admission algorithms.

AI and GenAI are especially relevant to distance learning and e-learning. Adaptive learning systems can significantly enhance accessibility by providing personalised learning experiences tailored to individual needs, making education more inclusive. For example, these systems can adjust the difficulty level of educational content based on the learner’s progress, ensuring that each student receives the appropriate level of challenge and support. Additionally, AI and GenAI can streamline administrative tasks such as grading and scheduling, thereby allowing educators to focus more on instruction and student interaction. A case study could examine the implementation of an AI-powered adaptive learning platform in large-scale online courses such as a MOOC. The study could analyse how the system personalises content delivery, provides targeted support and impacts learning outcomes for diverse student populations, including those with disabilities or non-traditional educational backgrounds.
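The progress-based difficulty adjustment that adaptive learning systems rely on is often implemented with an Elo- or Rasch-style learner model, in which both the learner’s ability estimate and each item’s difficulty estimate are nudged by the surprise of every observed outcome. The sketch below illustrates that idea under stated assumptions; the logistic form and the `k` step size are illustrative choices, not a prescription from this study.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Logistic (Rasch-style) probability that a learner answers an item
    of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def elo_update(ability: float, difficulty: float, correct: bool, k: float = 0.4):
    """Elo-style update: move the ability estimate up (and the item's
    difficulty estimate down) in proportion to how surprising the
    observed outcome was, and vice versa for an incorrect answer."""
    expected = p_correct(ability, difficulty)
    delta = k * ((1.0 if correct else 0.0) - expected)
    return ability + delta, difficulty - delta

# A learner who keeps succeeding drifts upward, so the platform can
# serve progressively harder content that stays appropriately challenging.
ability, item_difficulty = 0.0, 0.0
for _ in range(5):
    ability, item_difficulty = elo_update(ability, item_difficulty, correct=True)
```

After a run of correct answers the ability estimate rises while the item’s estimated difficulty falls, which is exactly the signal an adaptive platform uses to select the next, harder piece of content.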

To address the knowledge gap and gender disparities highlighted in this study, future research should investigate effective strategies for AI and GenAI education and training across disciplines and demographics. Targeted training programmes should be developed to enhance AI literacy among faculty and students, promoting equitable access to AI resources and fostering a culture of diversity and inclusion in AI development. Moreover, AI literacy should be integrated into the core curriculum across all disciplines to ensure that all students, regardless of their field of study, gain a basic understanding of AI technologies.

Furthermore, exploring the potential of interdisciplinary collaborations and knowledge-sharing could help bridge the divide between STEM and non-STEM fields, fostering a more comprehensive understanding and appreciation of AI and GenAI’s potential across academia. International partnerships can foster global academic collaboration and knowledge exchange through joint research projects and student exchange programmes. Additionally, interdisciplinary research centres dedicated to AI applications in various fields can promote collaboration across academic departments.

Promoting critical thinking and creativity is crucial in the integration of AI in education. AI-powered platforms can facilitate collaborative problem-solving and creative projects. For instance, students can use AI tools to explore new ideas, create art and develop innovative solutions to real-world problems. Encouraging exploration through such platforms can foster a culture of creativity and innovation, ensuring that students are not merely passive recipients of knowledge but active participants in their learning processes. An example project could be a university-wide ‘AI Innovation Challenge’, where interdisciplinary student teams use AI tools to address real-world problems. For instance, teams could use natural language processing (NLP) to analyse social media data for early detection of mental health issues among college students or develop AI-assisted design tools for sustainable urban planning. These projects would encourage students to think critically about AI’s capabilities and limitations while fostering creativity and practical problem-solving skills.

Ultimately, the goal of future research should be to develop a comprehensive, globally relevant framework for the responsible and equitable integration of AI and GenAI in academia. This framework should prioritise the enhancement of human capabilities, the promotion of ethical standards and the cultivation of a culture of lifelong learning and adaptability in the face of rapid technological change. It should include guidelines for curriculum design, faculty development, infrastructure investment and policy-making. By pursuing these research directions and fostering an inclusive approach to AI integration, academia can harness the transformative potential of AI and GenAI to enhance educational outcomes while promoting ethical and responsible use, bridging gaps and ensuring that the benefits are shared by all members of the global academic community.
