Published on 18.11.2024 in Vol 13 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/53616.
Benefits and Risks of AI in Health Care: Narrative Review

Authors of this article:

Margaret Chustecki1

Review

1Department of Internal Medicine, Yale School of Medicine, New Haven, CT, United States

Corresponding Author:

Margaret Chustecki, MBA, MD

Department of Internal Medicine

Yale School of Medicine

1952 Whitney Ave

3rd Floor

New Haven, CT, 06510

United States

Phone: 1 2038091700

Email: margaret.chustecki@imgnh.com


Background: The integration of artificial intelligence (AI) into health care has the potential to transform the industry, but it also raises ethical, regulatory, and safety concerns. This review paper provides an in-depth examination of the benefits and risks associated with AI in health care, with a focus on issues like biases, transparency, data privacy, and safety.

Objective: This study aims to evaluate the advantages and drawbacks of incorporating AI in health care. This assessment centers on the potential biases in AI algorithms, transparency challenges, data privacy issues, and safety risks in health care settings.

Methods: Studies included in this review were selected based on their relevance to AI applications in health care, focusing on ethical, regulatory, and safety considerations. Inclusion criteria encompassed peer-reviewed articles, reviews, and relevant research papers published in English. Exclusion criteria included non–peer-reviewed articles, editorials, and studies not directly related to AI in health care. A comprehensive literature search was conducted across 8 databases: OVID MEDLINE, OVID Embase, OVID PsycINFO, EBSCO CINAHL Plus with Full Text, ProQuest Sociological Abstracts, ProQuest Philosopher’s Index, ProQuest Advanced Technologies & Aerospace, and Wiley Cochrane Library. The search was last updated on June 23, 2023. Results were synthesized using qualitative methods to identify key themes and findings related to the benefits and risks of AI in health care.

Results: The literature search yielded 8796 articles. After removing duplicates and applying the inclusion and exclusion criteria, 44 studies were included in the qualitative synthesis. This review highlights the significant promise that AI holds in health care, such as enhancing health care delivery by providing more accurate diagnoses, personalized treatment plans, and efficient resource allocation. However, persistent concerns remain, including biases ingrained in AI algorithms, a lack of transparency in decision-making, potential compromises of patient data privacy, and safety risks associated with AI implementation in clinical settings.

Conclusions: In conclusion, while AI presents the opportunity for a health care revolution, it is imperative to address the ethical, regulatory, and safety challenges linked to its integration. Proactive measures are required to ensure that AI technologies are developed and deployed responsibly, striking a balance between innovation and the safeguarding of patient well-being.

Interact J Med Res 2024;13:e53616

doi:10.2196/53616

Introduction

Artificial intelligence (AI) has rapidly proliferated across various sectors in recent years, with the health care industry emerging as a primary arena for its transformative potential. This technological advancement holds promise for revolutionizing patient care and administrative operations by leveraging vast longitudinal patient data [1]. AI encompasses a spectrum of technologies, including machine learning (ML), natural language processing (NLP), rule-based expert systems (RBES), physical robots, and robotic process automation, each offering unique capabilities from predictive modeling and disease detection to enhancing surgical precision and automating administrative tasks [2-7]. The integration of AI into health care promises heightened diagnostic accuracy, informed decision-making, and optimized treatment planning, thereby potentially reducing medical errors and improving patient outcomes [1].

However, alongside these promising developments, AI adoption in health care is accompanied by significant ethical and regulatory challenges that require careful consideration [8]. Concerns range from safeguarding patient data privacy to addressing algorithmic biases that may perpetuate disparities in health care outcomes [9,10]. The regulatory landscape is evolving to keep pace with technological advancements, aiming to establish robust governance frameworks that ensure the responsible use of AI in health care settings. Furthermore, the advent of pretrained large language models, exemplified by models like BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pre-trained Transformer), and their variants, has further expanded the capabilities of AI in health care [11-14]. These models leverage vast amounts of text data to learn rich representations of language, enabling tasks ranging from clinical documentation improvement to automated summarization of medical literature [15,16].
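To make the use of such pretrained language models concrete, the following minimal sketch summarizes a short, invented clinical note with the open-source Hugging Face transformers library. The model checkpoint (facebook/bart-large-cnn) and the note text are illustrative assumptions and are not drawn from the studies cited above.

```python
# Illustrative sketch: summarizing an invented clinical note with a pretrained
# transformer (assumes the Hugging Face "transformers" package and a publicly
# available general-purpose checkpoint; not taken from the cited studies).
from transformers import pipeline

# A clinically fine-tuned checkpoint would be preferable in practice.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

note = (
    "72-year-old male with a history of type 2 diabetes and hypertension "
    "presents with three days of productive cough, fever of 38.6 C, and "
    "shortness of breath. Chest x-ray shows a right lower lobe infiltrate. "
    "Started on empiric antibiotics and admitted for observation."
)

summary = summarizer(note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```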

Against this backdrop, this study presents a narrative review aimed at comprehensively exploring the multifaceted role of AI in health care. By synthesizing existing literature, this research aims to provide insights into the diverse applications of AI, its associated benefits, and the ethical and regulatory considerations that underpin its integration into clinical practice [9,10,17]. This review aims to facilitate informed decision-making among health care professionals, policy makers, and researchers, fostering a balanced approach that maximizes the benefits of AI while mitigating potential risks within the health care landscape.

This review seeks to contribute to ongoing discussions on AI ethics, governance, and effective deployment strategies, thereby guiding the responsible and impactful adoption of AI technologies in health care. By examining current trends, challenges, and future directions, this review aims to lay the groundwork for advancing AI’s role in enhancing health care delivery, improving patient outcomes, and supporting health care systems globally.


Methods

Overview

This narrative review aims to assess the benefits and risks associated with the integration of AI into health care, with a primary focus on potential biases, transparency issues, data privacy concerns, and safety risks. A literature review was conducted to explore the current landscape of AI applications in health care and to identify relevant ethical, regulatory, and safety considerations.

Eligibility Criteria

Specific inclusion and exclusion criteria were established to guide the selection of studies for this narrative review. Studies were included if they were relevant to the 3 core concepts of AI, ethics, and health and were written in the English language. Articles were excluded if they did not explicitly address each of the core concepts of AI, ethics, and health or if they were not written in English. In addition, studies focusing solely on ethics and big data without explicit mention of AI methods or applications were excluded. Non–peer-reviewed academic literature, such as letters and non–peer-reviewed conference proceedings, as well as books and book chapters, were also excluded as they were deemed irrelevant to this review. No restrictions were applied regarding the publication date or study design to ensure a broad overview of the topic.

Information Sources

The literature search used 8 electronic databases: OVID MEDLINE (1946-present), OVID Embase (1947-present), OVID PsycINFO (1806-present), EBSCO CINAHL Plus with Full Text (1937-present), ProQuest Sociological Abstracts (1952-present), ProQuest Philosopher’s Index (1940-present), ProQuest Advanced Technologies & Aerospace (1962-present), and Wiley Cochrane Library. Search strategies were tailored to each database (Multimedia Appendix 1), using controlled vocabulary, Medical Subject Headings (MeSH) terms, EMTREE terms, American Psychological Association’s Thesaurus of Psychological Index Terms, CINAHL headings, Sociological Thesaurus, Philosopher’s Index subject headings, and Advanced Technologies & Aerospace subject headings. The searches were limited to English language–only articles, and filters excluding animal studies were applied to specific databases. In addition, a filter for health or medicine-related studies was applied to the Advanced Technologies & Aerospace database.

The final searches of the peer-reviewed literature were completed on June 23, 2023. Gray literature was not searched in this narrative review.

Selection and Sources of Evidence

All identified records from the academic literature searches were imported into the reference management software EndNote (Clarivate). After removing duplicate records, screening was conducted in 2 steps: initial title and abstract screening followed by full-text review. Full-text reviews were conducted to ensure that the selected studies provided substantial insights for the narrative synthesis.

Data Charting Process

Data charting forms were developed and refined based on the narrative review research question. The forms included fields for recording data such as the objective of each paper, institutional affiliations of authors, publication year, country of the first and corresponding authors, conflict of interest disclosures, health context of interest, AI applications or technologies discussed, ethical concepts, issues or implications raised, reference to global health, and recommendations for future research, policy, or practice. Data were recorded directly into the data charting form with corresponding page numbers to ensure accuracy.

Synthesis of Results

Data analysis included thematic components. Thematic analysis was conducted inductively, generating open descriptive codes from a sample of records. Codes were applied to relevant data points across all records, with new codes added as needed. These codes were then organized into themes, allowing for the identification of commonalities and gaps in the literature. Results are presented in a narrative format.
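As a minimal illustration of how open codes can be tallied and rolled up into themes, the short sketch below uses invented codes and an invented code-to-theme mapping; it is not the actual coding scheme used in this review.

```python
# Minimal sketch of the inductive coding step: open codes assigned to records
# are tallied and grouped into broader themes. Codes and the theme mapping are
# invented for illustration only.
from collections import Counter

record_codes = {
    "record_01": ["diagnostic accuracy", "algorithmic bias"],
    "record_02": ["data privacy", "algorithmic bias"],
    "record_03": ["workload reduction", "diagnostic accuracy"],
}

theme_map = {
    "diagnostic accuracy": "Benefits",
    "workload reduction": "Benefits",
    "algorithmic bias": "Risks",
    "data privacy": "Risks",
}

code_counts = Counter(code for codes in record_codes.values() for code in codes)
theme_counts = Counter()
for code, n in code_counts.items():
    theme_counts[theme_map[code]] += n

print(code_counts)   # frequency of each open code across records
print(theme_counts)  # how codes aggregate into the Benefits/Risks themes
```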


Results

Overview

This narrative review identified a broad range of insights into the integration of AI in health care, spanning both possibilities and challenges. This section organizes the findings into 2 overarching categories: “Benefits” and “Risks.” Each category encompasses the themes that emerged from the exploration of the academic literature, and together they illuminate the multifaceted landscape of AI’s influence on health care. The “Benefits” section describes the potential for AI to revolutionize health care delivery, ushering in more accurate diagnoses, personalized treatment regimens, and streamlined resource allocation. Conversely, the “Risks” section examines the ethical, privacy, and safety concerns that accompany the integration of AI into clinical settings. Through a comprehensive examination of these themes, this review provides a nuanced perspective on the implications and imperatives of harnessing AI’s potential for the betterment of health care.

The literature review process, illustrated in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram (Figure 1), followed a thorough and rigorous methodology. Initial searches across multiple databases (MEDLINE, Embase, PsycINFO, ProQuest Sociological Abstracts, CINAHL, ProQuest Philosopher’s Index, ProQuest Advanced Technologies & Aerospace, and Cochrane Library) yielded a total of 8796 articles. After removing 4798 duplicates using EndNote, 3738 unique records were screened for relevance. Of these, 3155 articles were excluded during title and abstract review for not meeting the inclusion criteria. The remaining 583 articles underwent full-text assessment for eligibility, during which 539 articles were excluded for reasons including unavailability of the full text (n=171), irrelevance to the primary outcome (n=290), non-English language (n=22), not being peer reviewed (n=29), and not being original research (n=27). Ultimately, 44 studies were included in the qualitative synthesis and data extraction. This selection process ensured that the final set of studies provided a robust and representative foundation for examining the integration of AI in health care.
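As a simple arithmetic check of the full-text screening stage reported above, the snippet below tallies the stated exclusion reasons and the resulting number of included studies.

```python
# Tally of the full-text exclusion reasons reported in the PRISMA flow above.
exclusions = {
    "full text unavailable": 171,
    "irrelevant to primary outcome": 290,
    "non-English language": 22,
    "not peer reviewed": 29,
    "not original research": 27,
}

assessed_full_text = 583
excluded_total = sum(exclusions.values())      # 539
included = assessed_full_text - excluded_total # 44

print(excluded_total, included)
```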

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) study selection flow diagram outlining the literature review process when searching for articles on various databases.

Benefits of AI in Health Care

Textbox 1 lists the main benefits of implementing AI in health care; each benefit is discussed in detail in the following sections.

Textbox 1. Benefits of artificial intelligence (AI) in health care.

Medical benefits

  • Helps in prediction of various risks and diseases
  • Helps in prevention and control of various diseases
  • Leads to better data-driven decisions within the health care system
  • Assists in improving surgery
  • Supports mental health

Economic and social benefits

  • Reduction in posttreatment expenditures
  • Cost saving through early diagnosis
  • Cost saving with enhanced clinical trials
  • Patient empowerment
  • Relieving medical practitioners’ workload

Medical Benefits

AI adoption in health care offers various medical, economic, and social benefits. This section discusses some of the key medical benefits of AI.

Prediction of Risks and Diseases

AI leverages big data to predict diseases and assess risk exposure among patients. For example, Google collaborates with health delivery networks to develop prediction models that alert clinicians of high-risk illnesses like sepsis and heart failure [18]. ML models can also be used to forecast populations at risk of specific diseases or accidents [19,20]. In addition, AI algorithms, such as deep learning, aid in disease classification and enable more personalized care [21].
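As a minimal sketch of how such risk prediction models are built, the example below trains a logistic regression classifier on synthetic, EHR-style tabular data using scikit-learn. The feature names, data, and outcome definition are invented for illustration; the systems cited above use far richer longitudinal records and more sophisticated models.

```python
# Minimal sketch of a disease-risk prediction model trained on synthetic,
# EHR-style tabular data (features and outcome are invented for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(65, 15, n),    # age (years)
    rng.normal(120, 20, n),   # systolic blood pressure (mm Hg)
    rng.normal(1.0, 0.3, n),  # serum creatinine (mg/dL)
])
# Synthetic binary outcome loosely driven by age and creatinine plus noise.
risk = 0.03 * (X[:, 0] - 65) + 1.5 * (X[:, 2] - 1.0)
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```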

Prevention and Control of Diseases

AI can play a significant role in the prevention and control of diseases. For instance, AI can enhance sexually transmitted infection (STI) prevention and control by improving surveillance and intervention. By analyzing publicly available social media data, AI can predict county-level syphilis prevalence, enabling faster and more efficient monitoring [22]. AI can also analyze trends in web data to reduce the stigma associated with STI prevention and care and identify and flag STI-related misinformation [22].

Data-Driven Decision Making

AI enables better data-driven decisions within the health care system. In a digitalized health care environment, the quality of decision-making relies on the availability and accuracy of underlying data [23]. AI can assist in decision-making by offering real-time recommendations based on clinical guidelines or advancements, reducing the likelihood of medical mistakes [24]. For example, IBM Watson Health uses ML to provide clinical decision support and achieved a high level of agreement with physician recommendations [25].
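A toy illustration of guideline-style decision-support logic is sketched below: a simple rule check that flags a patient when simplified screening criteria are met. The thresholds and criteria are illustrative only and do not represent the commercial systems cited above or any clinical guidance.

```python
# Toy rule-based decision-support check: flag a patient when simplified,
# guideline-style criteria are met (thresholds are illustrative only).
def sepsis_screen(temp_c: float, heart_rate: int, resp_rate: int, wbc: float) -> bool:
    """Return True if at least 2 of 4 simplified SIRS-like criteria are met."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
        wbc > 12.0 or wbc < 4.0,
    ]
    return sum(criteria) >= 2

if sepsis_screen(temp_c=38.6, heart_rate=104, resp_rate=24, wbc=13.5):
    print("Alert: patient meets screening criteria; review per local protocol.")
```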

Improvement in Surgery

AI has made significant advancements in surgical procedures. Robotic surgery, such as in gynecologic, prostate, and oral and maxillofacial surgery, enhances surgical precision and predictability [7,26]. Telesurgical techniques driven by AI enable remote surgery and provide better supervision of surgeons [27]. AI-powered surgical mentorship allows skilled surgeons to offer real-time advice and guidance to other surgeons during procedures, improving surgical outcomes [28].

Mental Health Support

AI use in mental health treatments is growing as patients prefer simple and quick feedback [29]. According to Lovejoy et al [30], psychiatric professionals have historically relied on therapeutic discourse and patient narrative to assess mental health, since language is the primary means of communicating emotional and mental well-being. Recent advancements in AI have opened new perspectives on the subject by enabling technology to infer emotional meaning from more data sources [30]. According to Habermann [31], with a unique combination of NLP and sentiment analysis, data scientists have developed algorithms to infer human emotion from text. Le Glaz et al [32] mention that, in recent years, NLP models have been used to track mental self-disclosure on Twitter, forecast suicide risk online, and identify suicidal thoughts in clinical notes. These models are used in medicine to give detailed insight into a patient’s emotional and psychological health [31].
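As a minimal sketch of lexicon-based sentiment scoring of the kind described above, the example below applies NLTK’s VADER analyzer to two invented patient-style messages. Real mental health NLP systems rely on clinically validated models and far richer features; this is only an illustration.

```python
# Minimal sketch of lexicon-based sentiment scoring on short, invented
# patient-style messages (VADER from NLTK; illustrative only).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

messages = [
    "I have been sleeping badly and feel hopeless most days.",
    "Therapy has been helping and I feel a bit more like myself.",
]

for text in messages:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos/compound scores
    print(f"{scores['compound']:+.2f}  {text}")
```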

Economic and Social Benefits

In addition to the medical health benefits, using AI in health care has other economic and social advantages, as discussed below.

Reduction in Posttreatment Expenditures

AI-powered systems can analyze posttreatment result patterns and identify the most effective remedies based on patient profiles. This personalized approach to care can significantly reduce the expenses associated with posttreatment complications, which are a major cost driver in health care systems worldwide [33]. By providing immediate diagnosis and appropriate interventions, AI can help minimize the financial burden of posttreatment complications and lead to substantial cost savings.

Cost Saving Through Early Diagnosis

AI has demonstrated superior accuracy and speed in analyzing medical images, such as mammograms, leading to the early detection of diseases like breast cancer. By enabling prompt diagnosis and action before issues escalate, AI can help reduce health care costs associated with late-stage diagnoses [28]. In addition, AI’s ability to process and interpret various medical tests, such as computed tomography scans, with high accuracy reduces the likelihood of physician errors, contributing to cost savings.

Cost Saving With Enhanced Clinical Trials

AI-powered programs can simulate and evaluate numerous potential treatments to predict their effectiveness against various diseases, optimizing the drug development process in clinical trials [34]. By leveraging biomarker monitoring frameworks and analyzing large volumes of patient data, AI accelerates the evaluation of potential treatments, leading to significant cost savings in the development of life-saving medications.

Patient Empowerment

AI has the potential to empower individuals in managing their health. Wearable devices, such as smartwatches, can collect standard health data, which AI algorithms can analyze to provide personalized health recommendations and warnings for potential diseases [35]. Smartphone apps that use ML algorithms can help patients with chronic diseases better manage their conditions, leading to healthier populations and reduced health care expenses [36].

Relieving Medical Practitioners’ Workload

AI technologies can alleviate the burden on health care workers by assisting with administrative tasks, data analysis, and image interpretation. AI can automate clerical responsibilities, analyze patient data more efficiently, and aid in diagnosing various medical conditions [37,38]. By reducing manual labor and prioritizing critical cases, AI helps save time and resources for medical practitioners, ultimately leading to increased productivity and improved patient care.

Risks of AI in Health Care

The risks of AI in health care are listed in Textbox 2.

Textbox 2. Risks of artificial intelligence (AI) in health care.

Risks of AI in health care

  • AI diagnosis is not always superior to human diagnosis
  • AI programs may be difficult to understand and overly ambitious
  • Implementation issues
  • Transparency issues and risks with data sharing
  • Biases
  • Mistakes in disease diagnosis or AI cannot be held accountable
  • Data availability and accessibility
  • Regulatory concerns
  • Social challenges

AI Diagnosis Is Not Always Superior to Human Diagnosis

While AI has the potential to improve diagnostic accuracy, it is not always superior to human diagnosis. Early AI systems, such as the MYCIN program developed in the 1970s, showed promise in diagnosing and treating diseases but did not outperform human diagnosticians [39]. These RBES needed better integration with clinical workflows and medical record systems to be practical and effective. In addition, AI models can suffer from overfitting, learning spurious correlations between patient characteristics and outcomes that lead to incorrect predictions when applied to new cases [40].
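The overfitting problem can be illustrated with a small synthetic example: a flexible model fitted to outcomes that are pure noise achieves near-perfect training accuracy but performs at chance level on held-out cases. The sketch below, using scikit-learn, is illustrative only.

```python
# Sketch illustrating overfitting: a model that memorizes noise in the training
# data scores almost perfectly there but generalizes poorly to new cases.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 20))    # 20 uninformative "patient features"
y = rng.integers(0, 2, size=300)  # outcome unrelated to the features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))  # ~1.0
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))    # ~0.5
```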

Challenges in Understanding and Ambition of AI Programs

Physicians may find it challenging to understand AI programs, particularly in complex domains like cancer diagnosis and treatment. IBM’s Watson program, which combines ML and NLP, garnered attention for its focus on precision medicine. However, integrating Watson into care processes and systems and programming it to handle certain types of cancer has proven difficult [41]. The ambition of AI programs, such as tackling complex cancer therapy, may exceed their current capabilities.

Implementation Issues

Implementing AI in health care faces several challenges. RBES embedded in electronic health care systems are commonly used but may lack the accuracy of algorithmic systems based on ML. These RBES struggle to keep up with evolving medical knowledge and handle large amounts of data [42]. The lack of empirical evidence confirming the efficacy of AI-based treatments in prospective clinical trials hinders successful implementation [43]. Much of the AI research in health care is preclinical and lacks real-world validation [44]. Integration into physician workflow is crucial for successful implementation, but there are limited examples of AI integration into clinical treatment, and training physicians to use AI effectively can be a time-consuming process [45].

Transparency Issues and Risks With Data Sharing

The use of intelligent machines in health care decision-making raises concerns about accountability, transparency, permission, and privacy [2]. Understanding and interpreting AI systems, such as deep learning algorithms used in image analysis, can be challenging [2]. Physicians who lack comprehension of the inner workings of AI models may struggle to communicate the medical treatment process to patients [46]. Increased reliance on AI may lead to automated decision-making, limiting the contact and communication between health care workers and patients [46].
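One family of post hoc explanation techniques estimates how much each input feature contributes to a model’s predictions. The sketch below applies permutation importance to a synthetic tabular model as a simple illustration; it does not address the harder problem of explaining the deep image-analysis models discussed above.

```python
# Sketch of one post hoc explanation technique (permutation importance) applied
# to a synthetic tabular model; explaining deep image models is much harder.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```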

The rapid emergence of new technologies in health care has sparked skepticism due to the risks associated with data sharing [17]. There is a need for public norms that ensure data governance and openness and that improve patient understanding of how and why data are used [17]. Concerns about privacy violations arise from the collection of large data sets and the potential for AI to infer personal information [47]. Patients may perceive this as a violation of their privacy, especially if the findings are disclosed to third parties [48].

Respecting patient confidentiality and acquiring informed consent for data use are ethically required [49]. AI systems should be protected from privacy breaches to prevent psychological and reputational harm to patients [49]. Recent incidents, such as the misuse of Facebook personal data by Cambridge Analytica and the sharing of patient data without explicit consent by the Royal Free London NHS Foundation Trust, have raised concerns about privacy violations [49,50].

Biases

ML systems in health care can be prone to algorithmic bias, leading to predictions based on noncausal factors like gender or ethnicity [51]. Prejudice and inequality are among the risks associated with health care AI [28]. Biases present in the data used to train AI systems can result in inaccurate outcomes, especially if certain races or genders are underrepresented [28]. Unrepresentative data can further perpetuate health inequities and lead to risk underestimation or overestimation in specific patient populations [52].
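A minimal sketch of how such bias can be surfaced is shown below: a simple fairness audit that compares false-negative rates across demographic groups on synthetic predictions in which the minority group is underrepresented and more often misclassified. The data and group labels are invented for illustration.

```python
# Sketch of a simple fairness audit: compare false-negative rates across
# demographic groups on synthetic predictions (invented data; real audits use
# held-out clinical data and multiple fairness metrics).
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])  # group B underrepresented
y_true = rng.integers(0, 2, size=1000)
# Simulate a model that misses positives more often in the minority group.
miss_prob = np.where(group == "B", 0.4, 0.1)
y_pred = np.where((y_true == 1) & (rng.random(1000) < miss_prob), 0, y_true)

for g in ["A", "B"]:
    mask = (group == g) & (y_true == 1)
    fnr = np.mean(y_pred[mask] == 0)
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```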

Mistakes in Disease Diagnosis and Lack of Accountability

AI systems can make mistakes in patient diagnosis and treatment, creating potential harm [28]. Holding AI systems accountable can be challenging, as liability concerns arise regarding errors and the allocation of responsibility [53]. The lack of explanation from deep learning algorithms can hinder both legal accountability and scientific understanding, potentially eroding patients’ trust in the system [54].

Determining accountability for AI failures is an ongoing challenge, as holding the physician accountable may seem unjust, while holding the developer accountable may be too removed from the clinical setting [24]. The question of who should be held accountable when AI systems fail remains to be answered [24].

Data Availability and Accessibility

Large amounts of data from various sources are required to train AI algorithms in health care [55]. However, accessing health data can be challenging due to fragmentation across different platforms and systems [55]. Data availability in health care is limited, and there is often a reluctance to share data between hospitals [56]. The continuous availability of data for ongoing improvement of ML-based systems can be difficult due to organizational resistance [57]. Technological advancements and improved algorithms can help address the problem of limited data sets [57].

Regulatory Concerns

The autolearn feature of AI software poses regulatory challenges, as algorithms evolve continuously with use [58]. This creates the need for additional policies and procedures to ensure patient safety [58]. Many countries have yet to formalize regulatory guidelines for assessing AI algorithmic safety, which can hinder AI adoption and lead to risky practices [59]. The lack of industry rules on the ethical use of AI in health care further complicates the accountability issue [60]. The Food and Drug Administration and the National Health Service are working to establish guidelines and standards, but regulatory approval can remain a barrier to adoption [60,61].

Social Challenges

Misconceptions about AI replacing health care jobs lead to skepticism and aversion to AI-based interventions [43]. However, the arrival of AI does not necessarily mean job obsolescence but rather job reengineering [62]. Overcoming skepticism and fostering trust in AI requires a better understanding of its capabilities and meaningful public discourse [62]. Improving public and health care professionals’ understanding of AI is essential to managing expectations and addressing concerns.


Discussion

Principal Findings

This narrative review delves into the dynamic landscape of AI integration in health care, aiming to uncover a spectrum of perspectives, concerns, and opportunities. The exploration encompasses a diverse range of health care settings across different countries and regions, revealing a rich picture of AI adoption. Overall, AI offers tremendous potential and will continue to play a crucial role in future health care decisions. If used successfully, AI can reduce pressure on health care workers while improving the quality of their work by lowering error rates and increasing precision. It has the potential to give people more control over their health decisions and can lower avoidable hospitalizations. It can broaden the scope of medical knowledge and build on present clinical guidelines. Given its advantages and its capacity to drive the development of precision medicine, AI is widely acknowledged to be a much-needed enhancement in medicine and is anticipated to eventually master most of the essential domains within health care.

However, there are difficulties associated with incorporating AI in health care. Acquiring enough data to train precise algorithms is a continual effort that necessitates a shift in attitudes toward data sharing that promotes technical advancement. Specific guidelines on how to securely adopt and evaluate AI technology are needed, as is research on AI’s potential and limitations. Robust research is also required to empirically demonstrate the benefits of AI in real-world settings. While the perfect conditions for successful AI adoption may not yet exist, there is still room for AI advancement in health care.

Given that AI has tremendous potential and is likely to shape the future of health care, there are a few crucial points to consider when using it. First, given the current lack of general agreement on AI governance, it may be impossible to develop AI-based systems whose algorithms generalize across all health care domains. As a result, it may be wise to concentrate on systems that can be implemented and used effectively in the health care institutions for which they were designed. Fundamentally, patient care must take priority over the excitement of cutting-edge technology: an AI system’s safety and competence must be weighed so that it is used only when appropriate and valuable to patients.

Second, AI in health care must still be complemented with human input. Although AI has advantages in speed and accuracy, physicians are still needed for more cognitively complex or psychological elements and activities. Similarly, although the detection and monitoring of vital disease symptoms are now automated, the objective behind AI is not to eliminate physician input but to focus physicians’ expertise on areas where it is most necessary and on what computers cannot and may never imitate. Therefore, developing complementarity between AI and physicians through training is essential.

In addition, while it is critical to temper expectations, it is also critical not to be excessively gloomy about the role of AI in health care. Although physicians may not fully comprehend the inner workings of AI algorithms, most physicians understand magnetic resonance imaging or computed tomography only to a limited degree, yet these studies are used extensively despite the lack of individual physician comprehension of their specific mechanisms. The lack of transparency in ML algorithms may thus be tolerable if the algorithm’s efficacy can be demonstrated. Again, this can be achieved by training and familiarizing the physician with the AI system.

Rather than holding AI to a standard of either perfect or nil results, one should compare the outcomes of using AI with those of real-world practice, where physicians can and will make mistakes. Importantly, AI is dynamic in nature and can improve with more extensive data sets. As a result, it is entirely possible that the combined use of physician and AI input will become more successful over time. However, it is critical not to overstate the current status of AI. Its implementation in health care will be a careful, deliberate, and progressive process, including strict control and monitoring of its use and efficacy. AI can help patients and increase the quality of care when combined with input and oversight from health care professionals. AI systems will not wholly replace human clinicians but will supplement their efforts to care for patients. Human clinicians may eventually shift toward roles and job designs that draw on distinctly human skills, such as enthusiasm, together with the knowledge needed to use AI effectively in health care.

As global communities live longer and the prevalence of chronic disease rises, the rising cost of health care will remain a pressing topic among health care stakeholders. It is time to seek the assistance of machines, as they can potentially reduce economic costs. Furthermore, coordination between government and private sector partners is vital to take full advantage of AI’s potential in health care delivery.

Ultimately, the key challenge for AI in many areas of health care is not whether the technologies will be capable enough to be useful but whether they will be adopted in daily clinical practice. For widespread adoption to occur, AI systems must be approved by regulators, integrated with electronic health record systems, standardized so that similar products perform similarly, taught to medical practitioners, paid for by public or private payer organizations, and updated in the field over time. Because AI has a significant and lasting impact on lives and is central to the future of health care, it is essential to address the associated concerns. Given its importance, AI needs proper policy guidelines and regulations regarding its use in health care to reap its maximum benefits.

Comparison With Previous Literature

In comparing the findings of this review with existing literature, several key similarities and differences emerge. This review aligns with Gazquez-Garcia et al [63] and Mooghali et al [64] in highlighting the crucial role of health care professional training for effective AI integration. Both emphasize the need for proficiency in AI fundamentals, data analytics, and ethical considerations, reinforcing the notion that successful AI adoption requires a well-prepared workforce. The review also echoes Sapci and Sapci’s [65] advocacy for incorporating AI education into medical curricula to address future challenges.

However, this review diverges in its emphasis on practical AI implementation challenges. While Moghadasi et al [66] and Muley et al [67] discuss the risks associated with AI, including the need for enhanced transparency and stakeholder collaboration, this review adds a nuanced perspective on balancing AI’s potential benefits with its ethical risks. For instance, this review highlights the importance of human oversight and the complementarity of AI with clinician expertise, which aligns with Morley et al [68] and Zhang and Zhang [69] but also offers additional insights into practical implementation issues not fully covered in the previous reviews.

In terms of public perception, this review supports Kerstan [70] and Castagno and Khalifa [71] by acknowledging that trust in AI is influenced by knowledge and transparency. However, it further explores the dynamic interaction between AI’s promise and the necessity for rigorous validation and ethical governance, as discussed by Macrae [72] and Tulk Jesso et al [73]. This review underscores that while AI has the potential to revolutionize health care, its integration must be handled with careful consideration of both practical and ethical dimensions to achieve meaningful improvements in patient care and outcomes.

Strengths and Limitations

This review has several limitations. First, the generalizability of the findings may be affected by the inherent variations in study methodologies, AI implementations, and health care settings across different regions. This heterogeneity introduces variability that could influence the applicability of the conclusions. To mitigate this limitation, rigorous search strategies were used across multiple databases to include a diverse range of studies. Future reviews could benefit from incorporating more standardized studies to enhance generalizability. Second, the reliance on published literature from electronic databases introduces potential publication bias. Studies with positive outcomes related to AI integration in health care may be more likely to be published, which could skew perceptions of AI effectiveness and adoption rates. Efforts were made to address this bias by including a broad range of databases and emphasizing recent literature. Future research should aim to include unpublished studies or gray literature to provide a more balanced view. Third, the rapid evolution of AI technologies means that newer developments and implementations may not have been fully captured in this review. The review focused on the most current literature available at the time of the search to address this issue. Regular updates will be necessary to incorporate the latest advancements and ensure the review remains relevant. In addition, this review did not incorporate stakeholder engagement, which could have enriched the study by providing additional depth and perspective. Engaging stakeholders in such a dynamic field would offer diverse viewpoints and further validate the conclusions. Future research should consider incorporating stakeholder engagement to enhance the robustness and applicability of the findings.

Despite these limitations, this review offers several notable strengths. It provides a comprehensive overview of AI integration in health care, leveraging rigorous search strategies across multiple databases to ensure a diverse and current collection of literature. This approach contributes to a nuanced understanding of AI’s potential and limitations. Furthermore, the emphasis on recent developments helps ensure that the review reflects the most current trends and advancements in AI technologies.

Future Directions

Moving forward, further research in the field of AI integration in health care should address several key areas to advance understanding and application. First, studies should prioritize incorporating stakeholder engagement, including health care providers, patients, policymakers, and technology developers, to provide diverse perspectives on AI adoption and implementation strategies, enhancing relevance and acceptance in clinical practice. Second, longitudinal studies are crucial to assess the long-term impacts of AI technologies in health care settings, providing insights into sustainability, scalability, and real-world effectiveness over time. Third, comprehensive research focusing on the ethical implications of AI, including data privacy, algorithm bias, patient consent, and regulatory frameworks, is needed to build trust and ensure responsible deployment. In addition, comparative effectiveness research comparing AI-assisted interventions with standard care protocols can provide evidence of AI’s impact on clinical outcomes, patient safety, and health care efficiency. Interdisciplinary collaboration between computer scientists, health care professionals, social scientists, and ethicists is essential to foster innovative approaches aligned with health care needs. Education and training programs for health care professionals on AI technologies will ensure proficiency in interpreting AI-generated insights and integrating them into patient care effectively. Finally, research should explore how AI can reduce health care disparities and improve access to quality care, particularly in underserved communities and low-resource settings. Addressing these priorities will realize AI’s potential in transforming health care delivery and improving patient outcomes globally.

Conclusions

In summary, AI presents a transformative force in health care, with the potential to enhance patient care, reduce errors, and broaden medical knowledge. However, its successful integration requires adaptability; complementarity with human expertise; transparency; and a deliberate, incremental approach. AI’s impact on health care is evolutionary, not revolutionary, and collaboration between stakeholders, standardization, education, and robust policies are essential to harness its full potential while upholding patient-centric care and innovation.

Data Availability

All data generated or analyzed during this study are included in this published paper and its supplementary information files.

Authors' Contributions

As the sole author of this manuscript, MC was responsible for all aspects of the study, including conceptualization, literature review, writing, and editing.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Detailed search strategy across databases for artificial intelligence (AI) integration in health care.

DOCX File, 14 KB

References

  1. Ghafur S, van Dael J, Leis M, Darzi A, Sheikh A. Public perceptions on data sharing: key insights from the UK and the USA. Lancet Digit Health. 2020;2(9):e444-e446. [FREE Full text] [CrossRef] [Medline]
  2. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-98. [FREE Full text] [CrossRef] [Medline]
  3. Lee SI, Celik S, Logsdon BA, Lundberg SM, Martins TJ, Oehler VG, et al. A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia. Nat Commun. 2018;9(1):42. [FREE Full text] [CrossRef] [Medline]
  4. Sordo M. Introduction to Neural Networks in Healthcare. In: Open Clinical Knowledge Management for Medical Care. Boston, MA. Margarita Sordo; 2002.
  5. Fakoor R, Ladhak F, Nazi A, Huber M. Using deep learning to enhance cancer diagnosis and classification. 2013. Presented at: The 30th International Conference on Machine Learning (ICML 2013), WHEALTH Workshop; June 16-21, 2013; Atlanta, GA. URL: https://www.researchgate.net/publication/281857285_Using_deep_learning_to_enhance_cancer_diagnosis_and_classification
  6. Vial A, Stirling D, Field M, Ros M, Ritz C, Carolan M, et al. The role of deep learning and radiomic feature extraction in cancer-specific predictive modelling: a review. Transl Cancer Res. 2018;7(3):803-816. [CrossRef]
  7. Davenport TH, Glaser J. Just-in-time delivery comes to knowledge management. Harv Bus Rev. 2002;80(7):107-11, 126. [Medline]
  8. Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inform Assoc. 2021;28(4):890-894. [FREE Full text] [CrossRef] [Medline]
  9. Albahri AS, Duhaim AM, Fadhel MA, Alnoor A, Baqer NS, Alzubaidi L, et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: assessment of quality, bias risk, and data fusion. Information Fusion. 2023;96:156-191. [CrossRef]
  10. Wesson P, Hswen Y, Valdes G, Stojanovski K, Handley MA. Risks and opportunities to ensure equity in the application of big data research in public health. Annu Rev Public Health. 2022;43:59-78. [FREE Full text] [CrossRef] [Medline]
  11. Powezka K, Slater L, Wall M, Gkoutos G, Juszczak M. Source of data for artificial intelligence applications in vascular surgery - a scoping review. medRxiv. Preprint posted online on October 4, 2023. [CrossRef]
  12. Gaviria-Valencia S, Murphy SP, Kaggal VC, McBane Ii RD, Rooke TW, Chaudhry R, et al. Near real-time natural language processing for the extraction of abdominal aortic aneurysm diagnoses from radiology reports: algorithm development and validation study. JMIR Med Inform. 2023;11:e40964. [FREE Full text] [CrossRef] [Medline]
  13. Liu W, Zhang X, Lv H, Li J, Liu Y, Yang Z, et al. Using a classification model for determining the value of liver radiological reports of patients with colorectal cancer. Front Oncol. 2022;12:913806. [FREE Full text] [CrossRef] [Medline]
  14. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194. [FREE Full text] [CrossRef] [Medline]
  15. Bhayana R. Chatbots and large language models in radiology: a practical primer for clinical and research applications. Radiology. 2024;310(1):e232756. [CrossRef] [Medline]
  16. Hu Y, Chen Q, Du J, Peng X, Keloth VK, Zuo X, et al. Improving large language models for clinical named entity recognition via prompt engineering. J Am Med Inform Assoc. 2024;31(9):1812-1820. [CrossRef] [Medline]
  17. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20): a comparative analysis. Lancet Digit Health. 2021;3(3):e195-e203. [FREE Full text] [CrossRef] [Medline]
  18. Rysavy M. Evidence-based medicine: a science of uncertainty and an art of probability. Virtual Mentor. 2013;15(1):4-8. [FREE Full text] [CrossRef] [Medline]
  19. Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, Hardt M, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med. 2018;1:18. [FREE Full text] [CrossRef] [Medline]
  20. Shimabukuro DW, Barton CW, Feldman MD, Mataraso SJ, Das R. Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial. BMJ Open Respir Res. 2017;4(1):e000234. [FREE Full text] [CrossRef] [Medline]
  21. Cheng JZ, Ni D, Chou YH, Qin J, Tiu CM, Chang YC, et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep. 2016;6:24454. [FREE Full text] [CrossRef] [Medline]
  22. Young SD, Crowley JS, Vermund SH. Artificial intelligence and sexual health in the USA. Lancet Digit Health. 2021;3(8):e467-e468. [FREE Full text] [CrossRef] [Medline]
  23. Madsen LB. Data-Driven Healthcare: How Analytics and BI are Transforming The Industry. Hoboken, NJ. Wiley Publishing; 2014.
  24. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull. 2021;139(1):4-15. [CrossRef] [Medline]
  25. Jones LD, Golan D, Hanna S, Ramachandran M. Artificial intelligence, machine learning and the evolution of healthcare: a bright future or cause for concern? Bone Joint Res. 2018;7(3):223-225. [FREE Full text]
  26. Hashimoto DA, Ward TM, Meireles OR. The role of artificial intelligence in surgery. Adv Surg. 2020;54:89-101. [CrossRef] [Medline]
  27. Akbar Safav A, Fekri P, Setoodeh P, Zadeh MH. Toward deep secure tele-surgery system. 2018. Presented at: The 16th International Conference on Scientific Computing (CSC'18); October 13, 2024; Las Vegas, NV. URL: https://www.researchgate.net/publication/346502758_Toward_Deep_Secure_Tele-surgery_System
  28. Shaheen MY. AI in healthcare: medical and socioeconomic benefits and challenges. OSF. Preprint posted online on September 28, 2021. [CrossRef]
  29. Luxton DD. An introduction to artificial intelligence in behavioral and mental health care. In: Artificial Intelligence in Behavioral and Mental Health Care. London, United Kingdom. Elsevier Academic Press; 2016:1-26.
  30. Lovejoy CA, Buch V, Maruthappu M. Technology and mental health: the role of artificial intelligence. Eur Psychiatry. Jan 2019;55:1-3. [CrossRef] [Medline]
  31. Habermann J. Language and psycho-social well-being. Knowledge Common Works. Preprint posted online in 2021. [CrossRef]
  32. Le Glaz A, Haralambous Y, Kim-Dufor DH, Lenca P, Billot R, Ryan TC, et al. Machine learning and natural language processing in mental health: systematic review. J Med Internet Res. 2021;23(5):e15708. [FREE Full text] [CrossRef] [Medline]
  33. Nguyen LT, Do TTH. Artificial Intelligence in Healthcare: A New Technology Benefit for Patients and Doctors. 2019. Presented at: Proceedings of the Portland International Conference on Management of Engineering and Technology (PICMET); August 25-29, 2019:1-15; Portland, OR. [CrossRef]
  34. Beck JT, Rammage M, Jackson GP, Preininger AM, Dankwa-Mullan I, Roebuck MC, et al. Artificial intelligence tool for optimizing eligibility screening for clinical trials in a large community cancer center. JCO Clin Cancer Inform. 2020;4:50-59. [FREE Full text] [CrossRef] [Medline]
  35. Ichikawa D, Saito T, Ujita W, Oyama H. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach. J Biomed Inform. 2016;64:20-24. [FREE Full text] [CrossRef] [Medline]
  36. Vollmer S, Mateen BA, Bohner G, Király FJ, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. 2020;368:l6927. [FREE Full text] [CrossRef] [Medline]
  37. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19-20. [CrossRef] [Medline]
  38. Dilsizian SE, Siegel EL. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment. Curr Cardiol Rep. 2014;16(1):441. [CrossRef] [Medline]
  39. Bush J. How AI is taking the scut work out of health care. Harvard Business Review. Mar 5, 2018. URL: https://hbr.org/2018/03/how-ai-is-taking-the-scut-work-out-of-health-care [accessed 2024-10-04]
  40. Wiens J, Shenoy ES. Machine learning for healthcare: on the verge of a major shift in healthcare epidemiology. Clin Infect Dis. 2018;66(1):149-153. [FREE Full text] [CrossRef] [Medline]
  41. Ross C, Swetlitz I. IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close. STAT. Sep 5, 2017. URL: https://www.statnews.com/2017/09/05/watson-ibm-cancer/ [accessed 2017-09-05]
  42. Davenport TH. The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. Cambridge, MA. The MIT Press; 2018.
  43. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Government Information Quarterly. 2019;36(2):368-383. [CrossRef]
  44. Fogel AL, Kvedar JC. Artificial intelligence powers digital medicine. NPJ Digit Med. 2018;1:5. [FREE Full text] [CrossRef] [Medline]
  45. Stewart J, Sprivulis P, Dwivedi G. Artificial intelligence and machine learning in emergency medicine. Emerg Med Australas. 2018;30(6):870-874. [CrossRef] [Medline]
  46. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLoS Med. 2018;15(11):e1002689. [CrossRef]
  47. Marwan M, Kartit A, Ouahmane H. Security enhancement in healthcare cloud using machine learning. Procedia Computer Science. 2018;127:388-397. [CrossRef]
  48. van der Schaar M, Alaa AM, Floto A, Gimson A, Scholtes S, Wood A, et al. How artificial intelligence and machine learning can help healthcare systems respond to COVID-19. Mach Learn. 2021;110(1):1-14. [FREE Full text] [CrossRef] [Medline]
  49. Dawson D, Schleiger E, McLaughlin J, Robinson C, Quezada G, Scowcroft J, et al. Artificial Intelligence: Australia’s Ethics Framework. A Discussion Paper. Eveleigh, Australia. Data61 CSIRO; 2019.
  50. Powles J, Hodson H. Google deepMind and healthcare in an age of algorithms. Health Technol (Berl). 2017;7(4):351-367. [FREE Full text] [CrossRef] [Medline]
  51. Davenport TH, Dreyer KJ. AI will change radiology, but it won't replace radiologists. Harvard Business Publishing Education. 2018. URL: https://hbsp.harvard.edu/search?action=&author=Keith+J+Dreyer+DO&activeTab=products [accessed 2024-10-08]
  52. Angwin J, Larson J, Mattu S, Kirchner L. Machine bias. In: Martin K, editor. Ethics of Data and Analytics. New York, NY. Auerbach Publications; 2022.
  53. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019;112(1):22-28. [FREE Full text] [CrossRef] [Medline]
  54. Wang F, Preininger A. AI in health: state of the art, challenges, and future directions. Yearb Med Inform. 2019;28(1):16-26. [FREE Full text] [CrossRef] [Medline]
  55. Hu J, Perer A, Wang F. Data-driven analytics for personalized healthcare. In: Healthcare Information Management Systems. 2016:529-554. [FREE Full text]
  56. Johnson KW, Torres Soto J, Glicksberg BS, Shameer K, Miotto R, Ali M, et al. Artificial intelligence in cardiology. J Am Coll Cardiol. 2018;71(23):2668-2679. [FREE Full text] [CrossRef] [Medline]
  57. Lopez K, Fodeh SJ, Allam A, Brandt CA, Krauthammer M. Reducing annotation burden through multimodal learning. Front Big Data. 2020;3:19. [FREE Full text] [CrossRef] [Medline]
  58. Deciding when to submit a 510(k) for a software change to an existing device. Food and Drug Administration. Oct 2017. URL: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/deciding-when-submit-510k-software-change-existing-device [accessed 2024-11-06]
  59. Parikh RB, Obermeyer Z, Navathe AS. Regulation of predictive analytics in medicine. Science. 2019;363(6429):810-812. [CrossRef]
  60. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. [FREE Full text] [CrossRef] [Medline]
  61. Ramesh AN, Kambhampati C, Monson JRT, Drew PJ. Artificial intelligence in medicine. Ann R Coll Surg Engl. 2004;86(5):334-338. [FREE Full text] [CrossRef] [Medline]
  62. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. [CrossRef] [Medline]
  63. Gazquez-Garcia J, Sánchez-Bocanegra CL, Sevillano JL. Artificial intelligence in the health sector: key skills for future health professionals. JMIR Preprints. Preprint posted online on March 7, 2024. [CrossRef]
  64. Mooghali M, Stroud AM, Yoo DW, Barry BA, Grimshaw AA, Ross JS, et al. Barriers and facilitators to trustworthy and ethical AI-enabled medical care from patient's and healthcare provider's perspectives: a literature review. medRxiv. Preprint posted online on October 2, 2023. [CrossRef]
  65. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020;6(1):e19285. [FREE Full text] [CrossRef] [Medline]
  66. Moghadasi N, Valdez RS, Piran M, Moghaddasi N, Linkov I, Polmateer TL, et al. Risk analysis of artificial intelligence in medicine with a multilayer concept of system order. Systems. 2024;12(2):47. [CrossRef]
  67. Muley A, Muzumdar P, Kurian G, Basyal GP. Risk of AI in healthcare: a comprehensive literature review and study framework. AJMAH. 2023;21(10):276-291. [CrossRef] [Medline]
  68. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. SSRN Journal. 2020. [CrossRef]
  69. Zhang J, Zhang ZM. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. 2023;23(1):7. [CrossRef]
  70. Kerstan S, Bienefeld N, Grote G. Choosing human over AI doctors? how comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Anal. 2024;44(4):939-957. [CrossRef] [Medline]
  71. Castagno S, Khalifa M. Perceptions of artificial intelligence among healthcare staff: a qualitative survey study. Front Artif Intell. 2020;3:578983. [FREE Full text] [CrossRef] [Medline]
  72. Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28(6):495-498. [CrossRef] [Medline]
  73. Tulk Jesso S, Kelliher A, Sanghavi H, Martin T, Henrickson Parker S. Inclusion of clinicians in the development and evaluation of clinical artificial intelligence tools: a systematic literature review. Front Psychol. 2022;13:830345. [FREE Full text] [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence
BERT: Bidirectional Encoder Representations from Transformers
GPT: Generative Pre-trained Transformer
MeSH: Medical Subject Headings
ML: machine learning
NLP: natural language processing
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RBES: rule-based expert systems
STI: sexually transmitted infection


Edited by T de Azevedo Cardoso; submitted 12.10.23; peer-reviewed by J Walsh, B Delaney, W Yang; comments to author 19.03.24; revised version received 17.06.24; accepted 19.09.24; published 18.11.24.

Copyright

©Margaret Chustecki. Originally published in the Interactive Journal of Medical Research (https://www.i-jmr.org/), 18.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/, as well as this copyright and license information must be included.