Abstract

This study is aimed at exploring the impact of artificial intelligence (AI) on academic research by employing a focus group research strategy. The focus group consists of individuals who are actively involved in academic research and have experience working with AI technologies. The purpose of the focus group is to gather in-depth insights into how AI has influenced research methodologies, findings, and overall knowledge creation. The study begins by identifying seven participants through purposive sampling, with the aim of recruiting a diverse group of individuals from various academic disciplines. Purposive sampling, also known as selective sampling, enhances the study's validity by ensuring that the sample consists of individuals with a high level of expertise in the subject matter. A group of seven is large enough to generate a diverse range of perspectives and experiences and small enough to ensure that every participating academic researcher has a chance to contribute to the conversation. The focus group is conducted using Zoom video conferencing to gather academics from different institutions across the world, eliminating the distance constraints of an in-person session and allowing a wide array of research specializations to be represented. Data analysis is conducted using a thematic analysis approach, with a focus on identifying key themes and patterns that emerge from the data. The findings of this study contribute to a better understanding of the impact of AI on academic research and provide insights into the potential future direction of AI in academic research. While the study is aimed at providing practical recommendations for researchers who are interested in incorporating AI into their research practices, it also ignites the conversation on the future incorporation of technologies into academic research activity.

Keywords: academics; academic research; AI; artificial intelligence; scholarly research

1. Introduction

Artificial intelligence (AI) refers to the use of computer algorithms and statistical models to process, analyze, and interpret data in research and teaching [1]. AI is rapidly transforming many aspects of modern society, including academic research [2]. As researchers increasingly debate AI technologies in academia, it is important to understand the impact of these technologies on research methodologies, findings, and overall knowledge creation.

AI has increasingly become a transformative force in various aspects of mainstream public life, from healthcare and transportation to finance and entertainment [3–7]. Advanced algorithms power everything from recommendation engines on streaming services to diagnostic tools in medicine, shaping the way people consume content, make decisions, and even understand the world [8]. In the academic sphere, AI's influence is particularly profound, ushering in a new era of research methodology and data analysis. Cutting-edge machine learning algorithms and natural language processing (NLP) technologies are aiding academics in tasks ranging from literature reviews to complex data interpretation, thereby not only increasing the speed and efficiency of research but also opening up new avenues for inquiry that were previously unimaginable [9]. The application of AI in academia has the potential to revolutionize traditional research paradigms, provided ethical considerations and methodological rigor are aptly addressed.

This study is aimed at exploring the impact of AI on academic research through a focus group research strategy. By gathering insights from scholars who are actively involved in academic research and have experience working with AI technologies, we hope to gain a better understanding of the impact, benefits, and/or challenges of using AI in academic research, the ethical implications of AI in research, and the potential for AI to transform the academic research landscape.

One of the main advantages of using AI in academic research is the ability to process and analyze large amounts of data quickly and efficiently [10–13]. This can be particularly useful in fields such as biology, medicine, and the social sciences, where large datasets are common. AI can help researchers identify patterns, trends, and relationships that may be difficult to detect using traditional research methods [10]. The use of AI in research also presents several challenges and potential ethical concerns. For example, there may be biases in the data used to train AI models, which can result in biased or inaccurate findings [14]. Additionally, there may be concerns around the transparency and interpretability of AI-generated results, as well as the potential for AI to replace human researchers or contribute to job displacement [2, 15]. Through the focus group research strategy, we aim to engage in a critical dialogue around these issues and gain a deeper understanding of the impact of AI on academic research. By incorporating the perspectives and experiences of scholars from diverse academic disciplines and locations, we hope to provide practical recommendations for researchers who are interested in incorporating AI into their research practices.
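The pattern-detection capability described above can be made concrete with a small, purely illustrative example. The Python sketch below (assuming NumPy and scikit-learn, with invented data points not drawn from any study discussed here) shows how an unsupervised clustering algorithm can surface groupings in a dataset that would be tedious to spot by manual inspection.

```python
# Minimal sketch of AI-assisted pattern detection on a toy dataset.
# The data points are invented; NumPy and scikit-learn are assumed to be available.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical two-dimensional measurements drawn from two latent groups.
rng = np.random.default_rng(seed=0)
group_1 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
group_2 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
data = np.vstack([group_1, group_2])

# Unsupervised clustering surfaces structure that a manual scan might miss at scale.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("Cluster sizes:", np.bincount(model.labels_))
print("Cluster centers:\n", model.cluster_centers_)
```

In practice, a researcher would still interpret and validate any clusters found; the sketch only illustrates the kind of large-scale pattern search that is impractical by hand.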

1.1. Theoretical Background

AI has increasingly become an integral part of academic research, offering transformative possibilities in data analysis, literature review automation, and predictive modeling [9]. The application of AI algorithms and machine learning techniques has the potential to revolutionize the research landscape by enhancing efficiency, accuracy, and depth of inquiry [10]. Underpinning these applications are theories such as machine learning theory, which explores algorithms' ability to learn from and make predictions based on data, and NLP, a subfield of AI focused on enabling machines to understand and interpret human language [16]. These technologies are making it possible to handle vast amounts of data and complex calculations that would be insurmountable or time-consuming for human researchers. AI's role in academic research has been supported by foundational frameworks like decision support systems and information processing theory, providing a theoretical basis for its utility in supporting complex decision-making processes and handling voluminous information [17, 18]. Therefore, AI's contribution to academic research is not merely practical but also theoretically grounded, offering new horizons for exploration and understanding. Research on AI in the specific context of academic research, however, remains limited; AI is a growing field, but its significance cannot be denied [19].

Collins et al. [20] present a systematic literature review of AI research in information systems (IS) between 2005 and 2020, identifying the currently reported business value and contributions of AI, the research and practical implications of its use, and opportunities for future AI research in the form of a research agenda. The paper analyzes the research methods and data collection techniques used in primary studies and categorizes the contributions of these studies. The study argues that a large proportion of research on AI in IS is focused on decision-making and that certain research methods, such as case studies and surveys, are more commonly used than others. Additionally, the paper identifies several gaps in the current research, such as the need for more longitudinal studies and closer attention to ethical issues in AI [20].

Gendron et al. [21] reflect on the potentially negative implications of AI in academic publishing, particularly the potential for AI to deskill or replace human involvement in key academic activities such as journal editing and reviewer selection. The authors argue that it is important for researchers worldwide to document, reflect on, and debate the implications of AI for academic publishing in order to better understand the impact of these technologies on its future. Based on an analysis of an email solicitation they received at CPA, the authors developed a subsection highlighting their concerns about the potentially harmful impact of excessive reliance on citations and bibliometric analyses in academic research, particularly in the field of accounting. Overall, the authors anticipate a decline in the author's role in the research process due to AI automation, which they support with their observations of AI-based literature reviews that put undue emphasis on citation counts [21].

In a 2023 survey-based descriptive study, "Impact and Perceived Value of the Revolutionary Advent of Artificial Intelligence in Research and Publishing Among Researchers," Thomas et al. [9] sought to investigate how researchers perceive the impact of AI on research. The study involved a global survey of researchers, authors, editors, publishers, and other stakeholders in the scholarly community, aimed at understanding the impact of the AI wave in the scholarly publishing domain. The survey results revealed that while plagiarism detection was the most widely known AI-based application, image recognition, data analytics, and language enhancement were other known applications of AI. The study found that while AI is recognized as a valuable tool for data analysis and visualization, there is still a need for further education and training to fully utilize its potential [9].

1.2. Research Gap

Despite the growing body of literature on the topic, there is still a research gap when it comes to understanding the specific impact of AI on academic research practices and outcomes. There is a need for more empirical research to understand how AI is actually being used in academic research and what its specific impacts are on research methodologies, findings, and overall knowledge creation. This proposed focus group research strategy is aimed at addressing this research gap by gathering in-depth insights from scholars who are actively involved in academic research and have experience working with AI technologies.

Therefore, this study seeks to answer the following questions:
RQ1: What is the impact of AI on research methodologies and findings in academic research?
RQ2: What are the potential ethical implications of using AI in academic research, and how can these be addressed?

The following section delves into existing works on the role of AI in educational and academic research, providing a comprehensive foundation for the current investigation into perceptions and practices surrounding AI in research methodologies and outcomes. This is followed by the theoretical and conceptual frameworks. Next, the Methodology section describes the purposive sampling approach and thematic analysis methods used to study the opinions and practices of academic researchers. The Findings and Implications section presents the key insights derived from the focus group discussions, organized around the themes identified for each research question. The Discussion section interprets these findings in the context of both the objectives of this study and the existing literature, exploring challenges and ethical concerns associated with AI in academic research. Finally, the Conclusion section summarizes the key insights, proposes recommendations for addressing identified challenges, and suggests directions for future research.

2. Literature Review

AI is a rapidly evolving field that has the potential to revolutionize many aspects of modern society, including academic research. In recent years, there has been growing interest in exploring the impact of AI on academic research, with several studies examining the benefits and challenges of using AI in research, as well as its potential future directions. In this context, a comprehensive literature review of related papers provides valuable insights into the application of AI in various research aspects under the following subheadings.

2.1. The Role of Big Data and AI in Education and Education Research

In the realm of education and academic research, the role of big data and AI has been a subject of increasing focus. Sun et al. [22] conducted a pivotal study on the interplay between big data and AI, emphasizing the transformative capacity these technologies hold for educational settings. Using a methodical review approach, their paper employed VOSviewer to map out 980 related articles, ultimately identifying key clusters of research within multidisciplinary contexts, education technology, and information sciences. Their work pinpoints central research topics, including learning analytics, intelligent tutoring systems, and collaborative learning. Although their research establishes the critical groundwork for educators, researchers, and policymakers, it also stresses the need for future work, especially regarding the development of new methodologies.

However, the paper falls short of deeply examining the methodological dimensions of AI application in academic research—a gap that this current study aims to address. Additional literature, such as du Boulay [23], Flores-Vivar and García-Peñalvo [24], and Williams et al. [25], complements this by calling attention to the nuanced ethical and practical considerations in implementing AI in educational research, thus providing a fuller context for understanding. These seminal works collectively underscore the methodological and ethical complexities involved, thereby informing the current study's focus on the practical and theoretical aspects of AI application in academic research.

2.2. AI Technologies for Education

The study by Zhang and Aslan [26] focuses on the burgeoning role of AI technologies in the field of education, examining how such tools can elevate learning experiences, foster improved educational outcomes, and highlight opportunities for improvement. Using a rigorous methodology that involved multiple database searches, their study finds that AI technologies can improve student engagement, motivation, and academic performance through personalized learning experiences, real-time feedback, and fostering collaboration. Their research makes the important recommendation that future work should not only address ethical issues but also take a more interdisciplinary approach to understanding how AI influences pedagogy, assessment, and learning environments. While the study adds considerably to the understanding of AI's capabilities within an educational context, it stops short of examining AI's wider implications in academic research—a void that this current study aims to fill. Additionally, other works like Park et al. [27] and Seo et al. [28] broaden the conversation by focusing on the ethical and data privacy issues around using AI in educational and research settings, which this current study also seeks to address.

2.3. The Impact of AI on Research

Weigel et al. [29] took a broad perspective in examining the impact of AI on multiple research domains. Through a literature review that assimilated findings from a diverse set of disciplines, the authors identified both challenges and opportunities in weaving AI into various research methodologies. Their emphasis on the necessity of interdisciplinary collaborations and ethical considerations provides a significant backdrop for the current study, specifically in understanding how these considerations play out in academic research involving AI. However, Weigel et al. fell short of examining the nuanced challenges and opportunities that AI presents within specific fields of research, an area this current study seeks to expand upon. Complementary to their work, additional studies like Varsha [30] and Bernal and Mazo [31] further delve into the issue of data transparency and ethical conduct in AI research, topics that will also be addressed in this investigation.

2.4. AI and the Conduct of Literature Reviews

Wagner et al. [32] focus on the role of AI in the specific task of conducting literature reviews, offering a framework for evaluating AI tools to enhance the literature review process. They detail how AI technologies, particularly NLP, can automate and refine various steps in the literature review, from problem formulation to data extraction. Their emphasis on the need for human expertise in tandem with AI for interpretation serves as a cautionary note, highlighting the balance that must be maintained between technology and human judgment. Although their work is instructive for improving efficiency and accuracy in literature reviews, it stops short of examining the broader implications, challenges, and opportunities of AI in the landscape of academic research. This gap leaves room for this current study to explore how their findings about the use of AI in literature reviews can be extended to broader research methodologies, an area also underscored by research from other scholars like Ahmad et al. [33] and Ahmad et al. [34] who delve into AI’s impact on data analytics in academic research.
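Wagner et al. [32] do not prescribe a specific toolchain, so the following is only a hedged illustration of the kind of screening step that NLP can automate in a literature review. The Python sketch (assuming scikit-learn, with an invented query and abstracts) ranks candidate abstracts by TF-IDF similarity to a research question; as the authors caution, human expertise would still decide inclusion.

```python
# Illustrative NLP-assisted screening step for a literature review.
# The query and abstracts are invented; scikit-learn is assumed to be available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "impact of artificial intelligence on academic research methodologies"

abstracts = {
    "paper_a": "We survey machine learning methods for automating literature reviews.",
    "paper_b": "A field study of soil erosion patterns in coastal regions.",
    "paper_c": "Ethical implications of AI-assisted data analysis in scholarly research.",
}

# Represent the query and the candidate abstracts as TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query, *abstracts.values()])

# Rank abstracts by cosine similarity to the query; this is a screening aid,
# not a substitute for human judgment about inclusion.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for title, score in sorted(zip(abstracts, scores), key=lambda pair: -pair[1]):
    print(f"{title}: {score:.2f}")
```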

This current study is aimed at exploring the application and impact of AI in academic research, drawing upon the foundations laid by the articles reviewed in the literature. Each piece of literature provides a different perspective and element of understanding that contributes significantly to the foundation, design, execution, and interpretation of the current study’s findings.

The paper by Sun et al. [22] provides valuable groundwork in the role of big data and AI in education and research, thereby reinforcing the understanding of AI’s potential in academic research. By pointing out the necessity for further exploration and the development of new methodologies, this research provides a solid basis for this study to delve into the methodological aspects of AI application in academic research. The research by Zhang and Aslan [26] underscores the growing importance of AI technologies in education, thereby setting the stage for the current study to explore AI’s broader impact on academic research. Their findings around enhancing learning experiences, improving educational outcomes, and identifying areas for improvement using AI can significantly inform the design and application of AI in academic research within our study. Weigel et al. [29] shed light on the influence of AI on various research fields, which offers a broader perspective on the challenges and opportunities that AI presents in different research areas. Their emphasis on interdisciplinary collaboration and ethical considerations provides valuable insights into the potential implications of AI-driven research, thereby informing our study’s approach. Wagner et al. [32] delve into the application of AI in literature reviews. Their proposed framework for evaluating AI tools can guide the current study in assessing AI’s role in literature reviews. Additionally, their emphasis on the balance between AI and human judgment echoes the current study’s approach in assessing the role of AI in academic research.

Each article contributes a different aspect to the understanding of AI’s role in academic research. By drawing on these insights, this study can extend the existing knowledge base and provide a comprehensive overview of AI’s application, challenges, and potential in academic research.

2.5. Theoretical Foundations

This study is based on the Technological Acceptance Model (TAM). The TAM is a well-established theoretical framework that is commonly used to understand the adoption and use of new technologies, including AI [35]. The TAM proposes that the acceptance and use of a technology are influenced by two key factors: perceived usefulness (PU) and perceived ease of use (PEOU) [35]. PU refers to the extent to which a technology is seen as beneficial for achieving specific goals or tasks. In the context of academic research, PU may be influenced by factors such as the ability of AI to process and analyze large amounts of data quickly and efficiently, or its potential to enhance research processes and improve research outcomes [36]. PEOU refers to the extent to which a technology is seen as easy to use and learn. In the context of academic research, PEOU may be influenced by factors such as the availability of user-friendly AI tools or the level of technical expertise required to use AI in research [36]. Applying the TAM to the current study’s hypotheses allows us to explore how the use of AI in academic research may affect researchers’ perceptions of AI technologies and, consequently, their willingness to adopt these tools in their work.

The TAM provides a robust theoretical backdrop for this focus group discussion. It suggests that technology adoption is primarily influenced by two factors: PU and PEOU. The discussion themes align with these foundational TAM principles as follows:
PU: The theme on the use of AI tools for academic research purposes explores current adoption rates and practical applications, aligning with TAM's idea of usefulness in technology. The theme on the PU of AI in academic research directly corresponds to TAM's "perceived usefulness," examining how researchers see the advantages of AI in academic work. The theme on AI contributions to research findings looks at output quality, effectively extending the notion of PU into tangible outcomes. The theme on the potential future impact of AI in academic research adds a future-oriented dimension to PU, examining expectations of AI's role in academia.
PEOU: The theme on challenges and barriers to the adoption of AI in academic research identifies hurdles that may affect the perceived ease of using AI, as per TAM. The theme on the AI-impacted role of human researchers in academic research evaluates how the role of human researchers could change with AI adoption, affecting the technology's PEOU.
Extensions to PU: The theme on the limitations of AI in academic research delves into factors that might limit AI's PU, such as data selection bias. The theme on AI and academic research unbiased concerns investigates the quality and impartiality of AI-generated research. The theme on risks associated with AI and academic research findings discusses risks that could affect PU and thereby the adoption of AI.
Ethical and social extensions: The theme on ethical concerns of AI in academic research addresses ethical considerations, which can influence both PU and PEOU. The theme on the ethical responsibility of academic institutions/researchers in AI deployment examines the broader ethical framework, a concern that extends the traditional TAM to include ethical accountability.

By integrating these focus group discussion themes within the TAM framework, we aim to not only validate the model’s core principles but also extend its applicability to include ethical, risk-related, and future-oriented factors. This comprehensive approach is aimed at facilitating a multidimensional understanding of the adoption and impact of AI in academic research.

2.6. Conceptual Framework

This conceptual framework is developed to illustrate how AI impacts academic research. The TAM posits that PU and PEOU are the two main determinants of an individual’s intention to use a technology, which in turn influences actual usage behavior.

This study proposes a conceptual framework consisting of the following components:
PU: The degree to which a researcher believes that using AI will enhance their research performance. This can include factors such as improved data analysis, increased accuracy, and time-saving capabilities.
PEOU: The extent to which a researcher believes that using AI will be free of effort. This can include factors such as user-friendly interfaces, accessibility, and the availability of training and support resources.
Actual use of AI: The extent to which a researcher implements AI in their research, which is influenced by their intention to use AI.
Research outcomes: The impact of AI on the quality, efficiency, and effectiveness of academic research, which is influenced by the actual use of AI.

Figure 1 demonstrates the relationships between these components, showing how the TAM can be applied to understand the adoption and impact of AI in academic research. By examining the nexus between the actual use of AI and research outcomes, both of which are shaped by researchers' perceived usefulness and perceived ease of use, the framework explains what drives the successful integration of AI technologies into academic work. This, in turn, can inform the development of strategies to promote the responsible and effective use of AI in research, ultimately leading to improved research outcomes.
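As a purely illustrative aid (not an analysis performed in this study, whose data are qualitative), the short Python sketch below encodes the TAM relationships described above. The construct scores and the equal weighting of PU and PEOU are hypothetical assumptions, not estimates from our data.

```python
# Illustrative-only encoding of the TAM relationships described above.
# Construct names follow the framework; the weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class TamAssessment:
    perceived_usefulness: float   # 0-1: belief that AI enhances research performance
    perceived_ease_of_use: float  # 0-1: belief that AI use is relatively effortless

    def intention_to_use(self) -> float:
        # TAM posits both constructs drive intention; equal weighting is an assumption.
        return 0.5 * self.perceived_usefulness + 0.5 * self.perceived_ease_of_use

    def expected_actual_use(self) -> float:
        # Intention is treated as the proximal driver of actual usage behavior,
        # which in turn shapes research outcomes.
        return self.intention_to_use()

# Example: a researcher who finds AI useful but not yet easy to use.
researcher = TamAssessment(perceived_usefulness=0.8, perceived_ease_of_use=0.4)
print(f"Intention to use AI: {researcher.intention_to_use():.2f}")
print(f"Expected actual use: {researcher.expected_actual_use():.2f}")
```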

3. Methodology

Grounded in the philosophy of interpretivism, this study employed a qualitative research methodology to deeply understand and interpret the views and experiences of participants engaged in the usage of AI in academic research [37]. The research approach was influenced by the belief that knowledge is constructed through the diverse viewpoints and shared experiences of participants [38] and by our positionality as academic researchers working in a technology-oriented field.

Data collection was conducted using a focus group strategy involving academic researchers with varying levels of experience in AI usage. This method was chosen due to its appropriateness for generating rich, interactive conversations and insights needed to address the research questions [39–41]. The focus group discussions were guided by a semistructured interview protocol, designed with open-ended questions to stimulate in-depth discussions and elicit participants' experiences and perceptions [42].

Data analysis was conducted using thematic analysis, which involved organizing the collected data, identifying common language to generate initial codes, noting areas of significance in each transcript, and finally identifying patterns or themes. This iterative process allowed for the emergence of meaningful themes that reflect the participants’ perspectives and experiences with AI in academic research. These themes were then interpreted and presented in a way that is understandable and transferable to other academics and stakeholders in higher education.
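The coding in this study was carried out manually by the research team. Purely to illustrate the code-to-category-to-theme structure described above, the following Python sketch (with invented codes, pseudonyms, and excerpts) shows how coded transcript extracts could be grouped under candidate themes.

```python
# Illustrative sketch of the code -> category -> theme structure used in the
# thematic analysis described above. The excerpts and code labels are invented.
from collections import defaultdict

# Coded transcript excerpts: (participant pseudonym, code, excerpt)
coded_excerpts = [
    ("P1", "AI-enhanced efficiency", "AI tools cut my data cleaning time in half."),
    ("P2", "AI-induced biases", "I worry the training data skews the findings."),
    ("P3", "AI-enhanced efficiency", "Literature searches are much faster now."),
    ("P1", "data privacy", "We cannot upload participant data to external tools."),
]

# Mapping of codes to candidate themes (refined iteratively in team meetings).
code_to_theme = {
    "AI-enhanced efficiency": "Perceived usefulness of AI in academic research",
    "AI-induced biases": "AI and academic research bias concerns",
    "data privacy": "Ethical concerns of AI in academic research",
}

# Group excerpts under their candidate themes for team review.
themes = defaultdict(list)
for participant, code, excerpt in coded_excerpts:
    themes[code_to_theme[code]].append((participant, excerpt))

for theme, excerpts in themes.items():
    print(f"{theme}: {len(excerpts)} excerpt(s)")
```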

This approach to data collection and analysis was designed to ensure the validity and reliability of the study. Measures were taken to ensure a diverse range of participants in terms of academic fields and experiences with AI. Furthermore, the use of member checking and peer debriefing ensured the accuracy and credibility of the data analysis [43].

3.1. Participant Recruitment

In order to ensure that the participants had practical exposure and understanding of AI usage in academic research, we developed specific inclusion criteria.

To participate in the study, individuals had to meet specific criteria aimed at ensuring the relevancy and depth of the insights collected. Participants were required to be academic researchers and faculty members or hold a PhD. In addition, they needed to have direct, hands-on experience with using AI tools within the context of their scholarly research. This stipulation was crucial for ensuring that the perspectives gathered were grounded in practical experience with AI technologies. Lastly, participants were expected to be articulate in sharing their individual experiences and viewpoints concerning the role and impact of AI in academic research. This last requirement was aimed at capturing nuanced opinions and fostering a rich, qualitative dataset for the study.

Upon implementing the necessary ethical considerations (as indicated below), we initiated recruitment by reaching out to potential participants who we believed met these criteria. From different universities and research institutions in the UK, Nigeria, and the United Arab Emirates, we identified seven individuals who satisfied the criteria. During the recruitment process, we provided a detailed explanation of the study, and all seven individuals were able to articulate their experiences and views regarding AI and agreed to participate in the study.

At the time of data collection, all participants were actively involved in academic research, and their expertise in AI varied, providing a diversity of perspectives. While participant demographic information was not a focus for this study, we ensured a diverse representation across different geographic regions and academic disciplines to enrich the perspectives in the study. We refrained from using elements such as race, ethnicity, or gender in the analysis to avoid any potential bias and ensure the focus remained on participants’ experiences and perspectives related to AI usage in academic research.

3.2. Ethical Considerations

Ethical considerations are important aspects of any research study, and this study on the impact of AI on academic research is no exception. The following ethical considerations were taken into account in this study:
Informed consent: All participants in the focus group were fully informed about the study and their participation in it. They were provided with information about the purpose of the study, the data collection methods, and their rights as participants. Participants were asked to provide written consent to participate in the study and were informed that they could withdraw at any time without penalty.
Confidentiality and anonymity: All data collected from the focus group are kept confidential and anonymous. Participants are not identified by name in any reports or publications resulting from the study. Only the research team has access to the data collected, and it is stored securely and confidentially.
Ethical use of AI: Any use of AI in the study was conducted in an ethical and responsible manner. Potential biases or limitations in the AI tools used were acknowledged and addressed, and the results of any AI analysis were verified by human researchers to ensure their accuracy and reliability.
Respect for participants: The research team ensured that all participants were treated with respect and dignity throughout the study. Participants were not asked to engage in any activities that are harmful, distressing, or demeaning, and their views and opinions were valued and respected.
Ethical and legal implications: The research team considered the ethical and legal implications of using AI in academic research. This includes issues such as data privacy and security, potential biases and discrimination, and the impact of AI on human labor and employment.

Overall, the ethical considerations in this study were carefully planned and implemented to ensure that the rights and welfare of the participants were protected and that the study was conducted in an ethical and responsible manner.

3.3. Focus Group

In light of the need to gather nuanced insights about the perceptions and experiences of researchers regarding the use of AI in academic research, we chose to conduct focus group discussions. This format is effective for facilitating interactive conversations and drawing out diverse perspectives [40, 41]. We devised a semistructured discussion guide to explore the impact of AI on academic research practices and outcomes and to provide context for both academic and nonacademic audiences interested in the topic [42].

The team collaborated to develop open-ended questions that we believed would stimulate in-depth responses from the participants. These questions were shaped by the existing literature on AI in academic research, as well as professional experiences [22, 26, 29]. For instance, we asked questions like, “How has the use of AI impacted your research methodology and outcomes?” Consistent with the flexibility of the semistructured focus group discussions, we allowed for adaptability and followed the natural flow of the discussions [44].

The focus group discussions were conducted virtually via Zoom in June 2023, ensuring a comfortable and convenient environment for participants across different geographical locations. Each session lasted between 60 and 90 min, depending on the depth and richness of the discussions. To accurately capture participants' responses, all discussions were recorded with their consent.

Following informed consent, we began each discussion by asking questions about the participants’ experiences using AI in their academic research, setting the stage for the main focus of this study. Subsequently, we prompted participants to express their views on the impact of AI on research methodologies and findings, as well as potential ethical implications. We also asked them to share their suggestions for ensuring representative and unbiased AI model training. We probed further during the discussions to clarify and expand on responses, ensuring the depth and comprehensiveness of insights. This included asking participants to elaborate on their experiences with potential biases and limitations of AI in research and their ideas for mitigating these challenges. For instance, we asked participants to explain how data selection bias and algorithmic bias might occur in AI-powered research and how they could be addressed. This way, we could gain a richer understanding of the complexities involved in the use of AI in academic research.
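Data selection bias of the kind participants were asked to elaborate on can be illustrated with a simple composition check. The Python sketch below (with invented figures, not participant data) compares the make-up of a hypothetical training sample against the population it is meant to represent and flags notable drift.

```python
# Toy illustration of data selection bias, the issue participants were asked
# to elaborate on. All figures are invented and do not come from the focus group.
population_share = {"discipline_A": 0.50, "discipline_B": 0.30, "discipline_C": 0.20}
training_sample_share = {"discipline_A": 0.78, "discipline_B": 0.15, "discipline_C": 0.07}

# Flag groups whose representation in the training data drifts notably
# from their share of the target population (threshold of 10 points is arbitrary).
for group, expected in population_share.items():
    observed = training_sample_share[group]
    drift = observed - expected
    flag = "over-represented" if drift > 0.1 else "under-represented" if drift < -0.1 else "ok"
    print(f"{group}: expected {expected:.0%}, observed {observed:.0%} -> {flag}")
```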

3.4. Analysis

Upon the completion of each focus group discussion, we exchanged recordings and transcriptions within the team for review, ensuring accuracy. We particularly focused on instances where participants discussed the impact and implications of AI in their academic research, developing preliminary codes to highlight these instances. For instance, exploration of AI usage revealed that we needed to be receptive to participants’ attempts to navigate the complexities and challenges associated with AI applications in research.

We held numerous meetings to review our initial interpretations of these codes, aiming to eliminate confusion and redundancy. One such example was the code for “AI adoption.” Initially, this referred to the basic use of AI tools, but after extensive discussions, we realized participants were discussing adoption in the context of AI’s integration into research methodologies. This required the development of a more nuanced code. After achieving consensus on the definition and scope of codes, we held subsequent meetings to code the transcripts together, ensuring consistency. In these meetings, we also noted shared knowledge and language regarding AI use in academic research among the participants.

Next, we grouped codes that shared similar content and meaning into categories. For example, all the participants expressed viewpoints about the use of AI and its potential impact on research findings, reflecting on the concept from differing angles. We categorized these individual descriptions under "perceived impact of AI" and grouped them together for further discussion. We then proceeded with a thematic analysis of the categorized data [45, 46]. This involved us examining each of the categories created from the codes. Through this process, we identified eight potential categories and compared them to the data to ensure their relevancy. For instance, we developed categories for "AI-enhanced efficiency" and "AI-induced biases." After a thorough review, we found that the "AI-induced biases" category was more comprehensive and thus absorbed the "AI-enhanced efficiency" category.

In the end, we discarded four categories due to redundancy. We concluded that the remaining four categories accurately reflected common ideas among the participants and were meaningful to our research question. These four categories became the main themes.

3.5. Trustworthiness

To ensure the trustworthiness and credibility of this study, several strategies were implemented throughout the research process.

First, the study employed a rigorous methodology, with a clear articulation of the methods used for data collection and analysis. The focus group discussion design allowed for the capture of diverse perspectives and experiences regarding the use of AI in academic research, enhancing the richness and authenticity of the data collected.

Second, data triangulation was used as a key strategy to validate the findings. The perspectives gathered from participants were compared and contrasted to gain a more holistic understanding of the issues at hand. This strategy also minimized potential bias that could have emerged from relying on a single data source.

Third, the process of peer debriefing was regularly used during the data analysis stage. Regular discussions and consultations with other experienced researchers in the field ensured that the analysis process was transparent and rigorous, enhancing the reliability of the findings.

Fourth, an audit trail was maintained to document the research process in detail, from the initial design and data collection to the final analysis and interpretation of results. This allowed for a thorough review and verification of the processes used, bolstering the dependability of the study.

Lastly, the findings were returned to the participants for member checking, a process wherein the participants validated the accuracy of the data and interpretations. This further improved the study’s credibility and confirmed that the participants’ views and experiences were accurately represented.

The aforementioned strategies helped to ensure the study’s trustworthiness, enhancing the overall credibility, reliability, and validity of the research findings.

4. Findings and Implications

Thematic analysis of the perceptions and experiences shared by the focus group participants revealed interesting insights into the use and implications of AI in academic research. Despite the overall consensus regarding the transformative potential of AI, we found that its application is not as widespread as one might expect. The participants, to whom we assigned pseudonyms, shared several concerns, opportunities, and challenges associated with AI use in their work.

To address RQ1 (what is the impact of AI on research methodologies and findings in academic research?), the identified themes are as follows:
Potential impact of AI on academic research.
Actual use of AI tools for academic research purposes.
Perceived usefulness of AI in academic research.
AI contributions to research findings.

To address RQ2 (what are the potential ethical implications of using AI in academic research, and how can these be addressed?), the identified themes are as follows:
Limitations of AI in academic research.
AI and academic research unbiased concerns.
Risks associated with AI and academic research findings.
Ethical concerns of AI in academic research.

Additional findings/themes identified are as follows:
Potential future impact of AI in academic research.
Challenges and barriers to the adoption of AI in academic research.
AI-impacted role of human researchers in academic research.
Ethical responsibility of academic institutions/researchers in AI deployment.

4.1. RQ1: What Is the Impact of AI on Research Methodologies and Findings in Academic Research?

The findings of the study across various discussions indicate a generally positive perception of the impact of AI on academic research, particularly in enhancing the efficiency and accuracy of data analysis [47]. The majority of participants expressed strong confidence in AI's potential, with a few being neutral or less convinced. This suggests that the academic community is open to the further incorporation of AI in research, specifically in the areas of methodology and data analysis [48]. Given this openness, further investments in AI-based tools for academic research are likely to be well received. However, the presence of neutral or less convinced participants points to the need for more information, education, and potential awareness campaigns about both the benefits and limitations of AI in an academic context.

When it comes to the actual use of AI tools in academic research, adoption among participants is not yet widespread [49]. Most participants have not used AI tools, signaling that there are barriers to adoption that need further exploration. These could include accessibility issues, lack of training, and resource constraints. This gap between positive perception and actual usage underlines the need for more comprehensive training programs and, perhaps, policy interventions to foster AI adoption [50].

The study also uncovered mixed opinions regarding the perceived usefulness of AI in improving research outcomes and knowledge creation. While most participants believe that AI can significantly contribute to research quality, some hold reservations or are skeptical [51]. These mixed attitudes indicate the necessity for a more nuanced understanding of the applications and limitations of AI across different research domains [52]. The variability in opinions also suggests that future interventions to promote AI adoption should be targeted and based on factors like field of study, prior experience with AI, and individual perceptions of AI’s capabilities and risks.

Lastly, a majority of participants believe that AI could contribute to more robust research findings, suggesting a collective optimism [53]. However, some skepticism does exist, highlighting the need for research into why these doubts persist. Whether due to ethical concerns or questions about the reliability of AI-generated data, these reservations need to be addressed to facilitate broader acceptance and investment in AI in academic research [54]. Overall, these findings lay the groundwork for more focused studies and educational initiatives aimed at optimizing the integration of AI into academic research processes.

4.2. RQ2: What Are the Potential Ethical Implications of Using AI in Academic Research, and How Can These Be Addressed?

Results indicate that a notable portion of participants is aware of the biases and limitations that AI might introduce in academic research, particularly concerning data selection and algorithmic bias [55]. The issue of interpretability and transparency also came to the forefront, as did “other” unspecified limitations. The identified concerns accentuate the necessity for the creation of ethical guidelines and best practices in the deployment of AI in research settings [56]. It is also vital to focus on transparent and explainable AI methodologies and to investigate any “other” limitations noted by participants to provide a more comprehensive understanding that could inform the development of future AI tools and educational programs.

The majority of participants employ specific methods like data preprocessing and sourcing from diverse datasets to minimize bias, indicating a proactive approach towards ethical AI usage [56]. These practices show that some in the academic community are already addressing issues of bias proactively [49]. This opens the avenue for best practice guidelines and educational initiatives focused on bias reduction, thereby facilitating more responsible AI use in academia.

The majority of participants acknowledged the risks of inaccurate or misleading findings associated with AI tools [57]. This highlights an urgent need for further study into the specifics of such risks, whether they stem from algorithmic bias, lack of transparency, or other issues [58]. This could lead to the formulation of more rigorous guidelines or frameworks for evaluating the reliability of AI-based findings in academic research.

Data privacy and security emerged as the predominant ethical concerns, whereas bias and discrimination seemed to be less on the radar for most participants [59, 60]. The pronounced concerns about data privacy necessitate strict data protection protocols and ethical guidelines [49]. At the same time, the lack of focus on bias and discrimination suggests a gap in awareness that needs to be addressed through educational programs. The need for transparent and explainable AI models also deserves emphasis in future guidelines for ethical research conduct using AI.

By taking into account these findings and implications, policymakers, academic institutions, and researchers can better understand the ethical landscape of AI in academic research [47, 48]. This understanding is crucial for making informed decisions on AI tool adoption, crafting guidelines, and establishing educational programs to ensure responsible and effective use of AI in academia.

4.3. Additional Findings

The majority of participants in the study expect AI technology to be increasingly integrated into academic research within the next decade, suggesting a widespread belief in its ability to enhance efficiency, accuracy, and research outcomes. However, a minority of participants anticipate that AI will find a home primarily in specific disciplines where it offers the most value, rather than becoming universally adopted. This divergence in expectations highlights the need for adaptive strategies. Training programs, education, and guidelines are essential to prepare researchers for an AI-driven research landscape, as noted by Kooli [56]. At the same time, as Albert [55] suggests, identifying and addressing barriers—be it ethical concerns or a lack of understanding—will be critical for broader AI adoption.

Technical expertise, financial considerations, and ethical concerns emerged as key challenges to AI adoption in academic research. Participants identified the lack of skills and training as a significant obstacle, pointing to the need for educational initiatives, as argued by O'Dea and O'Dea [61]. Cost and accessibility were also cited, indicating that more affordable and open-source AI solutions, as mentioned by Petitgand et al. [62], might encourage broader adoption. Furthermore, ethical and legal issues cannot be ignored; a framework that addresses concerns like data security, transparency, and bias is vital for responsible AI applications in academic settings.

Participants largely envision a future where the roles of human researchers and AI are complementary and collaborative. This majority view emphasizes the importance of fostering an environment conducive to human–AI interaction, supported by tools and best practices for effective collaboration. On the other hand, a minority believes that AI’s growing capabilities could overshadow human roles, thus necessitating ongoing dialogues about the future of human researchers and potential job displacement, as discussed by Gill et al. [63] and Lee [64]. The development of ethical guidelines is seen by most participants as crucial for ensuring responsible AI use in academic research. This shared perspective underscores the urgency for a standardized ethical framework, as emphasized by Burrell et al. [65], Cabanzo Carreño [66], and Fudge et al. [67]. Concurrently, there is a strong sentiment for enhancing the transparency and interpretability of AI-generated results. This will not only foster greater trust but also align AI applications more closely with ethical and societal values.

5. Discussion

This study is aimed at evaluating academic researchers' perceptions and the ethical implications of AI in academic research, aligned with the TAM. The study revealed a dichotomy. On one hand, there is optimism about AI's capability to bolster research methodologies—especially in efficiently handling big data. On the other hand, concerns abound, particularly about AI's limitations in interpreting nuanced data, often essential in qualitative research. The majority views AI as a supplementary tool rather than a complete replacement for traditional methods. The rise in AI application is notable, particularly in data-intensive fields like bioinformatics. However, a gap persists between its potential utility and actual usage. Concerns also exist about AI's potential to introduce bias or miss context, especially when trained on limited datasets.

Ethical considerations were pervasive, chiefly centered on data privacy and algorithmic transparency. The call for ethical guidelines and data protection measures indicates a multifaceted concern. Researchers advocate for transparent AI processes to maintain research credibility and suggest regular audits to keep biases at bay. Despite the promise, actual AI application in research is not as widespread as its perceived benefits. Barriers to adoption are evident, suggesting a need for targeted training and resource allocation.

The findings resonate with the existing literature. Sun et al. [22] and Zhang and Aslan [26] affirm the transformative potential of AI in education and research, mirroring the study's optimism. Weigel et al. [29] shed light on the universality of ethical concerns, which also emerge strongly in the findings. Wagner et al. [32] present a balanced viewpoint on AI's role, echoing the study's sentiment that AI should supplement, not replace, human judgment in academic research.

This study holds significant relevance for several key stakeholders in the realm of academic research and beyond, given the increasing pervasiveness and impact of AI in various fields of study. Firstly, the findings provide crucial insights for academic researchers who are currently using or planning to integrate AI into their research methodologies. The perception of AI's potential to enhance efficiency and accuracy in data analysis, along with concerns about possible biases, informs researchers about the advantages and challenges of incorporating AI [68, 69]. The findings will enable them to make informed decisions about how to best implement AI in their research practices and avoid potential pitfalls. Secondly, this study is significant for educational institutions and research organizations. By highlighting the gap between the potential and actual usage of AI, this study underscores the need for these organizations to invest in training and resources that enable the effective adoption of AI in research. The ethical concerns around AI usage emphasized in the study also underline the importance of developing and implementing clear ethical guidelines for AI use in research [69]. Thirdly, policymakers and regulatory bodies can also benefit from this study. The highlighted ethical implications of AI usage, such as privacy and data security, can guide these stakeholders in formulating relevant policies and regulations to ensure the responsible use of AI in academic research [70]. Finally, the study is also relevant to AI developers and technology companies. The feedback from academic researchers on their experiences with AI tools can provide valuable user insights that can be used to improve these tools, making them more user-friendly and effective in an academic research context [71].

The significance of this study lies in its ability to provide actionable insights to various stakeholders about the use of AI in academic research. The study fosters a greater understanding of the benefits, challenges, and ethical implications of AI usage, paving the way for more informed decisions, policies, and tools that enable the ethical and effective use of AI in research.

5.1. Conclusion

The study demonstrates a largely positive attitude towards the integration of AI in academic research, reflecting an awareness of its potential to enhance the efficiency, accuracy, and robustness of research outcomes. However, the results also underscore the necessity for academic researchers to handle AI tools responsibly to minimize biases, ensure data privacy, and maintain ethical integrity. Addressing the ethical implications requires clear guidelines, greater transparency in AI-generated results, and regular monitoring of AI use in academic research. Further, the study signals the importance of technical training and accessible AI tools to enable wider adoption in academic research. This study’s findings underline the evolving role of human researchers in a more AI-integrated academic landscape. Rather than replacing human researchers, AI is perceived as a complementary tool, suggesting a future of collaborative interaction between researchers and AI tools. Nonetheless, given the study’s relatively small sample size, further research with broader participation is recommended to validate and extend these findings.

This study on the impact of AI in academic research carries inherent limitations and delimitations that must be considered when interpreting and generalizing the results. Among the limitations is the small sample size of the focus group, which hampers the broad applicability of the findings. There is also the risk of self-selection bias, as participants likely have a vested interest in AI, potentially skewing the sample. Social desirability bias may further distort results, as participants may offer socially acceptable answers instead of their true opinions. Additionally, conducting the study in English could exclude non-English speaking researchers, narrowing the scope of perspectives. On the delimitation front, the study specifically targets AI's role in academic research, making the findings less relevant for other sectors. Temporal constraints on data collection and analysis may also limit the depth of insights. Lastly, by focusing on specific geographic regions (the UK, Nigeria, and the United Arab Emirates), the study's results may not be universally applicable. Despite these factors, the limitations and delimitations are openly acknowledged in the interpretation and discussion, and they serve as catalysts for pinpointing areas requiring further research.

5.2. Recommendations

Based on the findings of this study, the following recommendations are proposed to enhance the integration and ethical use of AI in academic research:
Address barriers to adoption: Given that a significant number of researchers are yet to adopt AI tools in their work despite recognizing their potential, there is a need for initiatives that address the barriers to AI adoption. This could include workshops and training programs to build technical expertise among researchers and strategies to improve the affordability and accessibility of AI tools.
Data security measures: Considering the significant concern around privacy and data security when using AI in academic research, institutions should prioritize the implementation of robust data protection measures. This might involve anonymizing data, using secure data storage and transmission methods, and ensuring that data is used in compliance with privacy laws and guidelines.
Mitigate AI biases: To tackle issues related to data selection and algorithmic biases in AI, researchers and institutions should adopt best practices in data preprocessing and use diverse and representative data sources. Algorithmic transparency and ongoing evaluation of AI tools can also help identify and address potential biases.
Ethical guidelines: Academic institutions should lead the development and adoption of clear ethical guidelines and standards for the use of AI in research. These guidelines should address issues such as data privacy, AI transparency, potential biases, and the verification of AI-assisted findings.
Promote transparency: There should be an emphasis on improving the transparency and interpretability of AI-generated results. This includes documenting the methodology and parameters of AI tools and ensuring that research findings involving AI can be independently verified.
Future research: As the role of AI in academic research continues to evolve, there is a need for ongoing research to monitor its impact, address emerging ethical implications, and guide its responsible use. Future research should also explore how the roles of human researchers are evolving with the increased use of AI and how this can be managed to maximize the benefits of AI without compromising human-centric aspects of academic research.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to containing information that could compromise the privacy of research participants.

Conflicts of Interest

The authors declare no conflicts of interest.

Author Contributions

All named authors contributed to the design and implementation of the research, to the analysis of the results, and to the writing of the manuscript.

Funding

The authors received no specific funding for this work.