Review
Abstract
Background: Artificial intelligence (AI) is increasingly integrated into palliative medicine, offering opportunities to improve quality, efficiency, and patient-centeredness in end-of-life care. However, its use raises complex ethical issues, including privacy, equity, dehumanization, and decision-making dilemmas.
Objective: We aim to critically analyze the main ethical implications of AI in end-of-life palliative care and examine the benefits and risks. We propose strategies for ethical and responsible implementation.
Methods: We conducted an integrative review of studies published from 2020 to 2025 in English, Portuguese, and Spanish, identified through systematic searches in PubMed, Scopus, and Google Scholar. Inclusion criteria were studies addressing AI in palliative medicine focusing on ethical implications or patient experience. Two reviewers independently performed study selection and data extraction, resolving discrepancies by consensus. The quality of the papers was assessed using the Critical Appraisal Skills Programme checklist and the Hawker et al tool.
Results: Six key themes emerged: (1) practical applications of AI, (2) communication and AI tools, (3) patient experience and humanization, (4) ethical implications, (5) quality of life perspectives, and (6) challenges and limitations. While AI shows promise for improving efficiency and personalization, consolidated real-world examples of efficiency and equity remain scarce. Key risks include algorithmic bias, cultural insensitivity, and the potential for reduced patient autonomy.
Conclusions: AI can transform palliative care, but implementation must be patient-centered and ethically grounded. Robust policies are needed to ensure equity, privacy, and humanization. Future research should address data diversity, social determinants, and culturally sensitive approaches.
doi:10.2196/73517
Introduction
Background
The review focused on studies published between 2020 and 2025 to capture the most recent advances in artificial intelligence (AI) technologies and their application in clinical practice, as the field has evolved rapidly in the last 5 years [ ]. This approach ensures the relevance of findings to current and emerging ethical challenges.

AI is a branch of computer science that develops systems capable of performing tasks that simulate human capabilities, such as learning, reasoning, and decision-making. Within this field, machine learning (ML) allows algorithms to learn from data without explicit programming, while deep learning, a subset of ML, uses multilayer neural networks to analyze large volumes of information and generate accurate predictions.
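To make these definitions concrete, the following minimal sketch (toy, invented data; not drawn from any cited study) shows the sense in which ML "learns from data without explicit programming": the classifier induces a decision rule from labeled examples rather than having the rule hand-coded.

```python
# Minimal supervised-learning sketch with toy, hypothetical data:
# [pain_score, days_since_admission] -> needs_review (0 or 1).
from sklearn.tree import DecisionTreeClassifier

X = [[2, 1], [8, 3], [3, 10], [9, 12], [1, 2], [7, 8]]  # example inputs
y = [0, 1, 0, 1, 0, 1]                                  # example labels

clf = DecisionTreeClassifier().fit(X, y)  # the rule is induced from the data
print(clf.predict([[6, 5]]))              # -> [1]; no rule was hand-coded
```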
AI significantly transforms the traditional health care paradigm toward an evidence-based and patient-centered model. Its application in areas such as the anticipation of complications, the personalization of treatments, and the optimization of resources has proven to be a key catalyst for improving the quality and efficiency of medical care [
]. Palliative medicine has also begun to benefit from the transformative potential of these technologies. This type of care, aimed at patients with advanced or terminal illnesses, seeks to alleviate physical, emotional, and spiritual suffering, improve quality of life, and promote dignity in the final moments [ ].

Palliative care encompasses a wide range of conditions, including advanced-stage oncological diseases (metastatic lung, breast, or pancreatic cancer) and nononcological illnesses such as neurodegenerative disorders (amyotrophic lateral sclerosis and late-stage Parkinson disease), end-stage organ failures (heart, lung, or renal disease), and severe respiratory conditions (chronic obstructive pulmonary disease). These patients, regardless of their specific diagnosis, share common needs: symptom relief, emotional support, and dignity preservation as they approach the end of life. The integration of AI in this sensitive context must therefore address the heterogeneity of these conditions while upholding ethical principles.

AI in palliative medicine includes tools such as predictive models to identify specific needs, wearable devices to monitor symptoms in real time, and virtual assistants that facilitate communication between patients, carers, and professionals. These innovations promise to improve clinical outcomes and enrich the patient experience by offering more personalized approaches [ ]. However, its implementation poses significant ethical challenges due to the inherent vulnerability of patients and the complexity of end-of-life decisions. Thus, when we apply AI in palliative care, we must ensure that these tools do not reduce patients to mere actionable data but reinforce their humanity and dignity, honoring their individuality and right to compassionate and ethically informed care. Furthermore, it is crucial to consider how these technologies may affect human dignity and avoid a possible dehumanization of care [ ].

Despite enthusiasm for AI's transformative potential, significant barriers to its widespread adoption in clinical practice remain. The lack of clear regulatory frameworks and consolidated examples of success highlights the urgent need for integrative research that addresses both the opportunities and the ethical and practical limitations of using AI in palliative care. As Miralles [
] pointed out in 2023, although multiple promising areas for applying AI in health care have been identified, few consolidated cases have achieved effective adoption in real clinical environments.

In recent years, various ethical self-assessment tools have been developed to verify the suitability of a system based on different ethical principles. In response to these concerns, various institutions have developed ethical guidelines to assess and regulate the responsible use of AI-based systems in sensitive contexts, such as the Ethical Guidelines for Trustworthy AI, the Draft Recommendation on the Ethics of Artificial Intelligence [ ], and the Barcelona Declaration [ ]. Other examples include From Principles to Practice by the AI Ethics Impact Group [ ], the report Technical Methods for Regulatory Inspection of Algorithmic Systems in Social Media Platforms [ ], and the Organisation for Economic Co-Operation and Development (OECD) Framework for the Classification of AI Systems [ ].

In line with international frameworks such as those of the Institute of Medicine (IOM) and the OECD, this review adopts a multidimensional understanding of "quality of care," which includes safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity as interrelated domains. While "efficiency" is thus an integral component of overall quality, for analytical clarity, we will at times refer to efficiency as system-level performance (resource optimization and process automation) and quality as patient-centered outcomes (dignity and symptom relief). This distinction, while recognizing their overlap, allows us to examine the specific effects of AI on both system operations and patient experience in palliative care.
Theoretical Justification
Ethical reflection on palliative care and AI is rooted in classical and contemporary philosophy.
In his Nicomachean Ethics [
], Aristotle posits the notion of the "good life" as the realization of the highest human capacities through virtue, wisdom, and justice. From this perspective, a "good death" implies respecting the dignity and well-being of the patient even at the end of life.

For his part, Immanuel Kant, in the Groundwork of the Metaphysics of Morals [

], argues that human dignity is an intrinsic and inalienable value, which precludes treating people as mere means to an end, even in medical or technological contexts; this requires that any intervention, including the application of AI, respects the autonomy and inherent value of each patient.

Finally, Emmanuel Lévinas, in Totality and Infinity [

], introduces the ethics of otherness, which stresses the importance of recognizing and preserving the uniqueness of the other. This approach is particularly relevant in palliative care, where care must focus on the individuality and dignity of the patient, avoiding technological reductionism that can depersonalize the end-of-life experience.

These 3 philosophical frameworks provide a sound basis for critically analyzing the opportunities and ethical challenges of integrating AI in palliative care.
We hypothesize that the application of AI in palliative medicine simultaneously offers significant opportunities for personalizing care and presents ethical risks that may compromise patient dignity. This study seeks to explore and examine this hypothesis through an integrative analysis of the recent literature.
Objectives
- To examine current and potential applications of AI in palliative care: through this integrative review of the literature and recent cases, we identify how AI is being used in end-of-life care, including tools for symptom management, clinical decision support, and communication between professionals, patients, and families.
- To analyze the ethical implications of using AI in palliative care: we investigate the ethical dilemmas arising from the integration of intelligent technologies in this field, such as privacy and handling of sensitive data, patient autonomy, equity in access to technology, and the possibility of depersonalizing care.
- To assess the impact of AI on patient experience and dignity at the end of life: we evaluate how the presence of AI influences the perception of quality of life, respect for dignity, and satisfaction of the emotional and spiritual needs of patients and their families.
- To propose recommendations for the ethical implementation of AI in palliative care.
Methods
Study Design
This research was carried out as an integrative review, which allows for synthesizing information from various study designs and offers a wide viewpoint on a challenging subject. The review included studies published in Spanish, Portuguese, and English between 2020 and January 2025. Because of their coverage of the biomedical and technological literature, the scientific databases consulted were PubMed, Scopus, and Google Scholar. Given the rapid development of the field over the past 5 years, we chose to focus on this recent period to ensure the inclusion of the most up-to-date and relevant advancements in AI applied to palliative medicine. This decision is supported by recent systematic reviews documenting a significant increase in the number and scope of published studies in this area [ ].

Search and Selection Process
The search strategy was designed to ensure a rigorous and systematic approach. Keywords such as "artificial intelligence," "palliative care," "palliative medicine," "medical ethics," "machine learning," and related combinations were used. The search process followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, which provide a standardized framework for transparent and comprehensive reporting, including a 27-item checklist and a flow diagram to document the identification, screening, eligibility, and inclusion of studies according to Page et al [ ].

Study eligibility was assessed in 2 stages:
- Title and abstract screening: Two reviewers screened all titles and abstracts for relevance.
- Full-text review: The same reviewers independently assessed the full texts of potentially eligible studies.
Two reviewers independently conducted both the study selection and the quality assessment processes. Any discrepancies between reviewers were resolved through consensus discussions or, if necessary, by consulting a third reviewer.
Inclusion criteria were (1) studies published between 2020 and 2025, (2) research addressing the use of AI in palliative medicine, and (3) articles analyzing ethical implications or the patient experience in this context.
Exclusion criteria were (1) studies not explicitly focused on palliative medicine, (2) research lacking ethical analysis or patient experience analysis, and (3) duplicate or non–peer-reviewed publications.
The PRISMA flow diagram summarizes the study selection process, including the number of records identified, screened, excluded, and included at each stage.
Data Extraction
Data extraction was performed independently by 2 reviewers (AGA and ASV). Any discrepancies were discussed and resolved collaboratively. Extracted data included study design, population, AI application, ethical focus, primary findings, and limitations.
Quality Appraisal
The quality of the included studies was assessed using 2 tools to ensure methodological rigor. Two reviewers performed the quality assessment independently, with discrepancies resolved by consensus.
The Critical Appraisal Skills Programme (CASP) checklist [
] was applied as a complementary tool to further assess each study's methodological quality and transparency.

Study area and author (year) | Key contribution | CASP rating
Prediction and clinical decision-making
Balch et al (2024) [ ] | Review of AIb for predicting PROc measures | Medium
Strand et al (2024) [ ] | AI/MLd model to identify hospitalized patients with cancer needing palliative care | High
He et al (2024) [ ] | Effective in targeting palliative support | High
Liu et al (2023) [ ] | Accurate prediction of short-term mortality | Medium
Heinzen et al (2023) [ ] | Improved early referral to palliative care | High
Morgan et al (2022) [ ] | AI improved early identification | High
Porter et al (2020) [ ] | Critical reflection on risks or opportunities of AI prediction in palliative care | Medium
Symptom management and quality of life
Salama et al (2024) [ ] | A systematic review of AI/ML in cancer pain management | High
Lazris et al (2024) [ ] | Comparison of AI-generated content (ChatGPT) vs NCCNe guidelines for cancer symptoms | Medium
Ott et al (2023) [ ] | Impact of smart sensors on the "total care" principle in palliative care | Medium
Deutsch et al (2023) [ ] | Improved monitoring and better symptom tracking | High
Yang et al (2021) [ ] | Wearables and ML to predict 7-day mortality in terminal cancer | Medium
Communication and emotional support
Gondode et al (2024) [ ] | Performance of AI chatbots (ChatGPT vs Gemini) in palliative care education | Medium
Srivastava and Srivastava (2023) [ ] | GPT-3's potential to improve palliative care communication | Medium
Process automation and modeling
Reason et al (2024) [ ] | LLMsf for automating economic modeling in health care | Low
Kamdar et al (2020) [ ] | Debate on AI's future in palliative care or hospice (benefits vs risks) | Medium
Windisch et al (2020) [ ] | AI's role in improving the timing or quality of palliative interventions | Medium
Ethics and challenges
See (2024) [ ] | AI as an ethical advisor in clinical contexts | Medium
Adegbesan et al (2024) [ ] | Ethical challenges of AI integration in palliative care | High
Ranard et al (2024) [ ] | Minimizing algorithmic biases in critical care via AI | High
De Panfilis et al (2023) [ ] | Framework for future policy | High
Ferrario et al (2023) [ ] | Ethics of algorithmic prediction of end-of-life preferences | High
Meier et al (2022) [ ] | Framework for ethical algorithm-based clinical decisions | Medium
Research and review of advances
Bozkurt et al (2024) [ ] | Protocol for assessing AI data diversity in palliative care | Medium
Macheka et al (2024) [ ] | Prospective assessment of AI in postdiagnostic cancer care; high feasibility and good patient feedback | High
Vu et al (2023) [ ] | Systematic review of ML applications in palliative care | High
Reddy et al (2023) [ ] | Review of AI advances for palliative cancer care | Medium
Barry et al (2023) [ ] | Challenges for evidence-based palliative care delivery | Medium
Chua et al (2021) [ ] | Path to AI implementation in oncology; pragmatic roadmap created | Medium
aCASP: Critical Appraisal Skills Programme.
bAI: artificial intelligence.
cPRO: patient-reported outcome.
dML: machine learning.
eNCCN: National Comprehensive Cancer Network.
fLLM: large language model.
The Hawker et al (2002) [

] checklist allows for systematic evaluation across diverse research designs. Each study was scored independently across 11 domains: clarity of purpose, study design, methodology, sampling, data analysis, ethical implications, relevance, transferability, results, discussion, and theoretical basis. Scores range from 1 (very poor) to 4 (good) for each domain; an overall quality score was calculated as the mean of all domain scores.

Study | Clarity of purpose | Design | Methodology | Sampling | Analysis | Ethical implications | Relevance | Transferability | Results | Discussion | Theoretical basis | Overall quality
Balch et al [ ] | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.27
Strand et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
He et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Liu et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Heinzen et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Morgan et al [ ] | 4 | 3 | 2 | 3 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Porter et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Salama et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Lazris et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Ott et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Deutsch et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Yang et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Gondode et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Srivastava and Srivastava [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Reason et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Kamdar et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Windisch et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
See [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Adegbesan et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Ranard et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
De Panfilis et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Ferrario et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Meier et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Bozkurt et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Macheka et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Vu et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Reddy et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
Barry et al [ ] | 2 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 2.91
Chua et al [ ] | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 4 | 3 | 3 | 3.09
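For transparency, the following short sketch shows how such a mean-based overall score is computed; the domain names mirror the table, but the example scores are illustrative and not taken from any included study.

```python
# Hawker et al (2002) domains as adapted in this review, each scored
# 1 (very poor) to 4 (good); overall quality = mean of the domain scores.
DOMAINS = [
    "clarity_of_purpose", "design", "methodology", "sampling", "analysis",
    "ethical_implications", "relevance", "transferability", "results",
    "discussion", "theoretical_basis",
]

def overall_quality(scores: dict) -> float:
    missing = [d for d in DOMAINS if d not in scores]
    if missing:
        raise ValueError(f"missing domain scores: {missing}")
    return round(sum(scores[d] for d in DOMAINS) / len(DOMAINS), 2)

# Illustrative scores only (not from the review's data):
example = dict.fromkeys(DOMAINS, 3)
example["clarity_of_purpose"] = 4
print(overall_quality(example))  # 3.09
```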
Thematic Analysis
Extracted data were thematically coded and grouped into 6 key categories—prediction and clinical decision-making, symptom management and quality of life, communication and emotional support, process automation and modeling, ethical implications, and research and review of advances in AI [
]—reflecting the main areas presented in the Results section.

Ensuring Methodological Rigor
To maximize the reliability and validity of the review:
- Triangulation: findings were compared across studies to identify consistent patterns.
- Peer review: the methodology was reviewed by experts in bioethics and AI.
- Critical evaluation: each study was assessed for quality, relevance, and validity using the criteria above.
This process allowed us to identify strengths, limitations, and potential biases in the included studies [ ].

Results
Overview
This section presents a thematic synthesis of the main findings regarding the ethical and practical implications of AI in palliative medicine at the end of life. The 29 included studies, published between 2020 and January 2025, covered various clinical contexts, populations, and AI applications. The results are structured in 6 key areas identified in the literature.
Three tables provide a comprehensive synthesis of the 29 studies included in this review.
The CASP table in the Quality Appraisal section presents the methodological quality rating of each study. Two independent reviewers rated each item as "yes," "no," or "unclear," and the percentage of "yes" responses was used to assign an overall rating of high, medium, or low.

The Hawker et al [ ] table shows the quality scores across the 11 domains of that instrument. Two reviewers rated each domain from 1 (very poor) to 4 (good), and we report both the individual domain scores and the overall mean quality score for each study.

The table below summarizes the key characteristics of the included studies. For each study, we list the author, year, country, design, AI/ML application, population or setting, study objective, and principal findings, grouped according to the 6 thematic areas identified in our review.
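As a minimal sketch of the CASP rating rule described above: the review states only that the percentage of "yes" responses determined a high, medium, or low rating, so the cut-offs used below are assumptions for illustration.

```python
def casp_rating(responses: list) -> str:
    """Map CASP item responses ('yes'/'no'/'unclear') to an overall rating.
    The 80%/50% thresholds are assumed for illustration; the review does
    not report its exact cut-offs."""
    pct_yes = 100 * responses.count("yes") / len(responses)
    if pct_yes >= 80:
        return "High"
    if pct_yes >= 50:
        return "Medium"
    return "Low"

print(casp_rating(["yes"] * 8 + ["unclear", "no"]))  # High (80% "yes")
```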
The analysis of the included studies revealed 6 key thematic areas in the application of AI in palliative care. These thematic areas provide a comprehensive overview of the current landscape and highlight both the opportunities and challenges presented by AI in this field.
Author (year) | Country | Study design | AIa application | Population or context | Study aim | Key findings |
Balch et al (2024) [ ] | United States | Review | AI for predicting PROsb | Patients with advanced cancer | Explore the use of AI in predicting PROs | AI shows potential but lacks validation
Strand et al (2024) [ ] | United States | MLc model development | Mortality prediction tool | Hospitalized patients with cancer | Develop a model to identify palliative needs | High predictive value for end-of-life care
He et al (2024) [ ] | United States | Cohort study | ML for palliative consultation allocation | Patients with cancer | Assign consultations based on predicted need | Effective in targeting palliative support
Liu et al (2023) [ ] | Taiwan | Observational | Wearables and ML | Patients with terminal cancer | Predict mortality risk in real time | Accurate prediction of short-term mortality
Heinzen et al (2023) [ ] | Germany | RCTd | ML timing intervention | Primary care | Assess the impact on care timing | Improved early referral to palliative care
Morgan et al (2022) [ ] | United States | RCT | AI prediction of care needs | Advanced cancer | Evaluate AI vs traditional triage | AI improved early identification
Porter et al (2020) [ ] | United Kingdom | Critical reflection | Ethical analysis of prediction | General palliative care | Reflect on risks and values in AI prediction | Warns about the dehumanization risk
Salama et al (2024) [ ] | United States | Systematic review | ML in pain management | Patients with cancer | Evaluate AI effectiveness in pain treatment | Supports the integration of AI tools
Lazris et al (2024) [ ] | United States | Comparison study | ChatGPT vs NCCNe | Cancer symptom guidance | Evaluate content quality | AI is aligned with guidelines in most areas
Ott et al (2023) [ ] | Germany | Observational | Smart sensors for monitoring | Palliative care patients | Assess the "total care" principle using technology | Improved monitoring and better symptom tracking
Deutsch et al (2023) [ ] | Germany | Observational | ML for PROs monitoring | Metastatic cancer | Track patient outcomes | Improved reporting and early alerts
Yang et al (2021) [ ] | China | Cohort study | Wearables and ML | Terminal cancer | Predict 7-day mortality | High predictive accuracy
Gondode et al (2024) [ ] | United States | Comparative study | ChatGPT vs Gemini in education | Health care professionals | Compare chatbot effectiveness | Both tools are effective; Gemini is more accurate
Srivastava and Srivastava (2023) [ ] | India | Exploratory | GPT-3 communication support | General palliative population | Examine AI in patient-clinician dialogue | Potential for improving conversations
Reason et al (2024) [ ] | United Kingdom | Implementation study | LLMsf for economic modeling | Palliative systems planning | Automate health economic models | LLMs reduce the workload but need oversight
Kamdar et al (2020) [ ] | United States | Debate or commentary | AI in palliative care | General palliative systems | Debate AI's pros and cons | Highlights opportunities and ethical risks
Windisch et al (2020) [ ] | Germany | Case study | AI-enhanced timing | Hospital-based care | Improve intervention timing | Faster, more targeted responses
See (2024) [ ] | United States | Qualitative | AI as an ethical advisor | Oncology settings | Evaluate AI-generated ethical suggestions | Useful but lacking nuance
Adegbesan et al (2024) [ ] | Nigeria | Thematic analysis | Ethical challenges of AI | Low-resource settings | Explore equity and justice issues | AI raises equity concerns
Ranard et al (2024) [ ] | United States | Technical study | Bias minimization algorithms | Critical care AI | Reduce bias in predictions | The algorithm reduced disparities in results
De Panfilis et al (2023) [ ] | Italy | Conceptual framework | Ethical issues in AI | Palliative care context | Define ethical concerns | Framework for future policy
Ferrario et al (2023) [ ] | United Kingdom | Ethical analysis | Predictive systems for end-of-life | Hospice settings | Assess algorithmic risks | Need for transparent systems
Meier et al (2022) [ ] | United States | Framework proposal | AI clinical decision support | Advanced illness patients | Propose ethical guidance | Applicable for clinical protocol design
Bozkurt et al (2024) [ ] | Turkey | Protocol | Diversity metrics in data | Mixed cancer cohorts | Establish a framework for diversity | Supports inclusive data use
Macheka et al (2024) [ ] | Zimbabwe | Prospective evaluation | AI in postdiagnosis care | Rural patients with cancer | Evaluate implementation outcomes | High feasibility and good patient feedback
Vu et al (2023) [ ] | Switzerland | Systematic review | ML in palliative care | Various populations | Map applications and outcomes | AI is growing in scope and evidence
Reddy et al (2023) [ ] | India | Narrative review | AI for palliative oncology | Patients with cancer | Summarize recent AI use | Progress is seen, but fragmented evidence
Barry et al (2023) [ ] | United States | Survey | Evidence-based palliative AI | Clinicians and patients | Identify barriers to adoption | Concerns about data quality and trust
Chua et al (2021) [ ] | Singapore | Implementation framework | AI in oncology | Urban hospitals | Design path for AI adoption | Pragmatic roadmap created
aAI: artificial intelligence.
bPRO: patient-reported outcome.
cML: machine learning.
dRCT: randomized controlled trial.
eNCCN: National Comprehensive Cancer Network.
fLLM: large language model.
Prediction and Clinical Decision-Making
AI has demonstrated significant potential in supporting clinical decision-making and anticipating patient needs in palliative care. For example, Strand et al [
] developed a machine learning model that more accurately identified hospitalized patients with cancer who could benefit from specialized palliative care, outperforming traditional approaches. Similarly, Salama et al [ ] and Liu et al [ ] reported that AI tools can help personalize pain management and predict imminent terminal events, allowing for more timely interventions. However, Porter et al [ ] cautioned that excessive reliance on algorithmic predictions, especially when models lack transparency or interpretability, may undermine human sensitivity and clinical judgment in complex palliative care scenarios.

Symptom Management and Quality of Life
Effective symptom management and quality of life improvement are central in palliative medicine, and AI tools are being tested in oncological and nononcological populations. AI applications have facilitated individualized pain control and symptom monitoring for patients with cancer [
]. In noncancer contexts, Ott et al [ ] described using smart sensors for real-time symptom tracking in neurodegenerative diseases. However, studies such as that by Deutsch et al [ ] highlighted the risk of bias when training datasets underrepresent specific populations (patients without cancer or minority groups), potentially limiting the generalizability and fairness of AI-driven symptom management.

Communication and AI Tools
AI-based communication tools, including chatbots and natural language processing models, have been explored to support interactions between professionals, patients, and families. Gondode et al [
] and Srivastava and Srivastava [ ] analyzed large language models such as GPT-3 to facilitate information delivery and emotional support. However, these tools often reflect Western bioethical principles and may not adapt well to cultural contexts where family-centered decision-making or gradual truth disclosure is preferred [ ]. Several studies reported that AI trained on Anglo-Saxon datasets may misinterpret emotional cues or cultural preferences, underlining the need for culturally sensitive models and community co-design.

Process Automation and Modeling
AI process automation can optimize resource allocation and improve efficiency in palliative care. Reason et al [
] demonstrated how large language models can automate economic modeling, potentially reducing costs and improving service access. Windisch et al [ ] emphasized the benefits of AI in improving the timing and quality of palliative interventions. However, Kamdar et al [ ] stressed the importance of maintaining a patient-centered approach, even in highly automated environments.
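To clarify what "economic modeling" means here, the following is a minimal sketch of a Markov cohort cost model of the kind such LLM pipelines aim to draft automatically; the states, transition probabilities, costs, and discount rate are invented assumptions, not figures from Reason et al.

```python
import numpy as np

# Illustrative 3-state Markov cohort model (stable -> progressive -> dead).
# All transition probabilities, costs, and the discount rate are assumptions.
P = np.array([
    [0.85, 0.10, 0.05],  # from stable
    [0.00, 0.70, 0.30],  # from progressive
    [0.00, 0.00, 1.00],  # dead is absorbing
])
cost_per_cycle = np.array([1_000.0, 5_000.0, 0.0])  # cost per state per cycle
cohort = np.array([1.0, 0.0, 0.0])                  # whole cohort starts stable
discount_rate, total_cost = 0.03, 0.0

for cycle in range(10):  # 10 yearly cycles
    total_cost += (cohort @ cost_per_cycle) / (1 + discount_rate) ** cycle
    cohort = cohort @ P  # advance the cohort one cycle

print(f"Discounted expected cost per patient: {total_cost:,.0f}")
```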
Ethical Implications

The ethical challenges of AI in palliative medicine are complex and multifaceted. Ferrario et al [
] analyzed the need for transparency and accountability in algorithmic prediction of end-of-life preferences. Ranard et al [ ] addressed the risks of algorithmic bias, especially when models are trained on unrepresentative data, which can lead to inequitable care decisions. Finally, See [ ] explored the potential of AI as an ethical advisor but noted that this application is still in its early stages and requires further research.

Regarding the ethical design of data-driven decision support tools, Bak et al [ ] discuss the importance of considering ethical principles, such as algorithmic fairness and privacy, in the development of decision support tools in oncology, with direct implications for palliative medicine.

Ethical considerations also arise with cancer chatbots. Chow et al [ ] address the need for transparency and informed consent in the use of AI-based chatbots in cancer care, emphasizing the risks of dehumanization and loss of trust.

Research and Review of Advances in AI
Recent reviews and methodological studies have documented advances and limitations in AI applications for palliative care. Vu et al [
] systematically reviewed ML applications, highlighting the need for more robust, real-world evidence. Reddy et al [ ] summarized advances in AI-based symptom management, and Bozkurt et al [ ] developed protocols to assess the robustness of these applications. Macheka et al [ ] evaluated the role of AI in postdiagnostic treatment pathways, emphasizing the need for continuous research and ethical oversight.

AI offers innovative prediction, symptom management, communication, and resource optimization solutions in palliative medicine. However, its implementation is accompanied by significant ethical, cultural, and practical challenges, especially regarding equity, humanization, and respect for patient autonomy. The literature highlights the importance of addressing patient heterogeneity, cultural context, and social determinants to ensure that AI applications are practical and ethically acceptable.
Discussion
Principal Findings
This integrative review demonstrates that AI is increasingly present in palliative care, offering innovative solutions for clinical prediction, symptom management, communication, and process automation. The main findings suggest that while AI can improve efficiency and support decision-making, there remains a significant lack of consolidated, real-world examples that simultaneously demonstrate both efficiency and equity in outcomes. Most published studies focus on technical feasibility or operational improvements, but few document how AI enhances equitable access or patient-centered outcomes across diverse populations. This gap underscores the need for more robust, contextually grounded evidence to guide the ethical implementation of AI in end-of-life care [
].

A key finding is the persistent tension between efficiency and quality. Although frameworks such as the IOM and OECD recognize efficiency, equity, and patient-centeredness as embedded dimensions of quality, our review shows that improvements in system-level efficiency (resource optimization, automated symptom tracking) do not always translate into perceived improvements in care quality by patients and families. In palliative care, relational and dignity-centered outcomes, such as humanization, emotional support, and respect for autonomy, remain fundamental and may be at risk if AI is implemented without careful ethical consideration [ ].

It should be noted that this paper at times distinguishes between "quality" of care and "efficiency," whereas efficiency, equity, and patient-centeredness are internationally recognized as embedded measures of quality of care. According to frameworks such as the IOM [ ], quality of care encompasses 6 interrelated domains: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. Thus, efficiency is not a separate attribute but an integral component of quality, alongside equity and patient-centeredness. We have therefore harmonized the terminology and analysis to reflect this international consensus, avoiding an artificial dichotomy between quality and efficiency.

Comparison to Prior Work
Our findings align with previous reviews, highlighting both AI's transformative potential in palliative settings and the ethical challenges it introduces. Recent literature confirms that AI's integration in palliative care is still early, with limited robust evidence for improved equity or patient experience. While some studies report promising advances in prediction and symptom management, others caution about algorithmic bias, lack of transparency, and the risk of dehumanization [ ].

The review expands on prior work by addressing the historical and cultural variability of ethical principles. Many AI tools in palliative care are developed and validated in Western contexts, reflecting assumptions about autonomy, truth-telling, and individual decision-making that may not be universally applicable. Studies show that AI models trained on Anglo-Saxon datasets can misinterpret emotional cues or cultural preferences, particularly in Southern European, Latin American, or other non-Western settings where family-centered decision-making and gradual truth disclosure are common practice [ ]. This highlights the need for culturally sensitive AI models and participatory design processes.

Real Case: Mortality Prediction and Advance Care Planning
A recent study analyzed the implementation of an AI system designed to predict the likelihood of a patient dying in the next 12 months to facilitate timely discussions about palliative care. The study developed an explainable ML model using electronic health records data to proactively identify patients with advanced cancer at high risk of mortality. The model demonstrated strong predictive performance (area under receiver operating characteristic curve 0.861) and was intended to support early integration of palliative care in outpatient oncology settings [
]. However, introducing these predictions into the clinical process generated significant disagreements among health care professionals, patients, and family members. The main concerns included:

- Patient autonomy: algorithmic predictions were sometimes used without adequate informed consent, potentially undermining patients' ability to decide about their care.
- Justice and equity: models trained on unrepresentative data risked introducing bias, disproportionately affecting marginalized groups.
- Beneficence and nonmaleficence: overreliance on AI predictions led to inappropriate interventions, such as premature end-of-life planning, without considering individual complexity.
This case highlights the need for a rigorous ethical approach to integrating AI into palliative care, ensuring that technologies complement clinical judgment and human empathy, not replace them.
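For readers unfamiliar with the metric reported above, the sketch below shows how a model's discrimination is summarized by the area under the receiver operating characteristic curve; it uses synthetic data and an interpretable logistic regression, and is purely illustrative rather than the published system. The hypothetical feature names are assumptions.

```python
# Illustrative sketch (synthetic data; not the published model): an
# interpretable mortality-risk classifier evaluated by AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "albumin", "weight_change", "admissions"]  # hypothetical
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=1000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# AUROC: probability a randomly chosen positive case outranks a negative one.
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUROC: {auroc:.3f}")

# Coefficients give a transparent view of each feature's contribution.
print(dict(zip(features, model.coef_[0].round(2))))
```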
Strengths and Limitations
This integrative review has several strengths. First, it offers a comprehensive and up-to-date synthesis of the recent literature on AI in end-of-life palliative care, focusing on the period between 2020 and 2025. This time frame was chosen to reflect the most current technological developments and their ethical implications. Second, the review's methodological rigor is reinforced by the use of 2 quality appraisal tools, CASP and Hawker et al [
], which allowed a systematic evaluation across diverse study designs. Additionally, including philosophical, clinical, and ethical perspectives provides a multidimensional framework for understanding the challenges and opportunities of AI in this sensitive field. Furthermore, the review is enriched by presenting a real-world case, illustrating the practical dilemmas and tensions encountered when implementing predictive AI tools in clinical settings.

Nonetheless, several limitations must be acknowledged. The narrow time frame may have excluded relevant earlier studies; however, this was a deliberate choice to focus on recent developments most applicable to current practice. The language scope was restricted to English, Portuguese, and Spanish, which may limit the generalizability of findings to other cultural contexts. Additionally, the heterogeneity of study designs and the variability in reporting standards precluded a meta-analysis, so a thematic synthesis was performed instead. Another limitation is that while some studies addressed social determinants of health (SDOH), such as literacy, socioeconomic status, or geographic inequalities, this aspect was not systematically captured across the entire corpus. Finally, given the fast-evolving nature of AI technologies, the findings and conclusions may become outdated over time, highlighting the need for ongoing surveillance and review.
Relatedly, the review by Ortiz et al [

] addressed the opportunities and challenges of using wearable sensors to monitor patients with cancer, including data integration and user acceptance issues.

Future Directions
Integrating AI into palliative care necessitates a balanced approach that prioritizes ethical rigor, patient-centered outcomes, and cultural adaptability. Based on recent evidence, the following directions are critical for advancing this field responsibly.
Enhancing Predictive Models With Diverse Data
ML models, such as those predicting mortality to facilitate early palliative care referrals in Medicare Advantage populations, demonstrate high accuracy but require broader validation across diverse demographics. Future research must prioritize datasets that include underrepresented groups (patients without cancer and ethnic minorities) to mitigate algorithmic bias and ensure equitable access to palliative services. For instance, models trained on SDOH and multi-institutional data could improve generalizability while addressing systemic disparities in end-of-life care [
].
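One way to operationalize this is a subgroup performance audit; the sketch below (an assumed approach with synthetic data, not a method reported by the cited studies) compares discrimination across demographic groups to surface potential algorithmic bias.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auroc(y_true, y_score, groups):
    """AUROC per subgroup; large gaps flag potential algorithmic bias."""
    return {
        g: roc_auc_score(y_true[groups == g], y_score[groups == g])
        for g in np.unique(groups)
    }

# Synthetic held-out predictions for illustration only:
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
scores = np.clip(0.3 * y + rng.normal(0.5, 0.25, 500), 0, 1)
groups = rng.choice(["group_a", "group_b"], 500)
print(subgroup_auroc(y, scores, groups))
```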
Ethical Co-Design of Decision Support Tools

The development of AI-driven decision support systems in oncology, as exemplified by the 4D PICTURE project, highlights the importance of participatory design involving patients, clinicians, and ethicists [
]. Key strategies include the following: (1) integrating ethical review processes to address data bias, privacy, and transparency; (2) ensuring AI tools align with cultural norms (family-centered decision-making in non-Western contexts); and (3) adopting frameworks such as design justice to empower marginalized voices in tool development.

Hybrid Care Models Balancing Telehealth and Human Interaction
While telehealth improves accessibility and psychological comfort for palliative patients, studies emphasize that optimal care requires complementing AI tools with in-person interactions [
]. Future implementations should (1) integrate AI for routine monitoring (symptom tracking via wearables) while reserving complex decisions for clinician-patient dialogues and (2) address technical barriers (poor internet connectivity) that undermine interpersonal connection in telehealth.

Transparency and Explainability in Mortality Prediction
Explainable AI models, such as the transparent mortality prediction tool for patients with cancer developed by Bertsimas et al [
], enhance clinician trust and facilitate shared decision-making. Recommendations include (1) standardizing reporting of model features (weight changes and albumin levels) to improve clinical interpretability and (2) validating tools in real-world settings to assess their impact on goals-of-care discussions and patient autonomy.

AI as a Supplement, Not Replacement, for Human Judgment
Despite their reliability in answering breast cancer surgery queries (average score 3.98/5), AI chatbots such as ChatGPT must be framed as supplements to, not substitutes for, medical expertise [
]. Future work should (1) develop guardrails to prevent overreliance on AI, such as mandatory disclaimers urging users to consult clinicians, and (2) train chatbots to recognize cultural nuances in communication (gradual truth disclosure in specific populations).

Ethical Deployment of Oncology Chatbots
Building on frameworks for human-centered AI in oncology [
], developers must prioritize (1) transparency: disclosing data sources and limitations to users; (2) autonomy: allowing patients to opt out of AI-driven interactions; and (3) equity: ensuring chatbots are accessible across literacy and socioeconomic levels.

Conclusions
AI is poised to transform end-of-life palliative care, offering innovative clinical prediction, symptom management, communication, and resource optimization solutions. However, this review highlights that AI’s successful and ethical integration in such a sensitive field is far from straightforward.
First, while AI can improve efficiency and operational aspects of care, there is still a notable lack of consolidated, real-world examples demonstrating both efficiency and equity in outcomes. Most current evidence is derived from pilot studies or technical feasibility reports, with limited documentation of long-term, patient-centered benefits across diverse populations.
Second, the ethical challenges surrounding AI in palliative care are complex and multifaceted. Issues such as algorithmic bias, lack of transparency, risks to patient autonomy, and the potential for dehumanization must be addressed proactively. The review underscores that efficiency, equity, and patient-centeredness are not isolated goals but interrelated dimensions of quality care, as recognized by international frameworks such as the IOM.
Third, cultural context and patient heterogeneity are critical factors. Many AI tools are developed based on Western bioethical assumptions, which may not be universally applicable, especially in settings where family-centered decision-making or culturally nuanced understandings of pain and suffering prevail. Addressing SDOH and ensuring the inclusion of marginalized voices in AI development is essential for equitable implementation.
Fourth, patients’ and families’ experiences must remain at the heart of palliative care innovation. AI should complement, not replace, clinical judgment, human empathy, and the relational aspects that define quality end-of-life care.
Key Recommendations
Our key recommendations are to (1) develop clear policies and regulatory frameworks to ensure fairness, privacy, transparency, and accountability in the use of AI in palliative care; (2) promote the humanization of care by designing AI tools that support, rather than supplant, compassionate human interaction and shared decision-making; (3) foster continuous, multidisciplinary research to rigorously evaluate the benefits and risks of AI, address algorithmic biases, and adapt tools to diverse clinical and cultural contexts; and (4) encourage participatory approaches involving patients, families, clinicians, ethicists, and community representatives in designing, implementing, and evaluating AI systems.
Final Perspective
Ultimately, the ethical adoption of AI in palliative medicine requires a careful balance between technological innovation and the preservation of human dignity. Only through patient-centered, culturally sensitive, and ethically grounded strategies can we maximize the benefits of AI while mitigating its risks, thus improving the experience and outcomes for patients and families at the end of life.
Acknowledgments
The authors would like to thank the individuals whose support facilitated the conduct of this study.
Authors' Contributions
Each author actively participated in this study’s conception, development, and writing. AGA, DGS, and ASV contributed to conceptualizing this study and its methodology, while FLC, ACB, and HM-F contributed technical expertise and interpretation of the findings. AGA and ASV participated in drafting this paper and critically revising the content. All authors have read and approved the final version of this paper.
Conflicts of Interest
None declared.
Review strategy and quality assessment.
DOCX File, 18 KB

PRISMA checklist.
DOCX File, 271 KB

References
- Bozkurt S, Fereydooni S, Kar I, Diop Chalmers C, Leslie SL, Pathak R, et al. Investigating data diversity and model robustness of AI applications in palliative care and hospice: protocol for scoping review. JMIR Res Protoc. 2024;13:e56353. [FREE Full text] [CrossRef] [Medline]
- Miralles F. La transformación de la salud mediante uso de datos e inteligencia artificial: ámbitos de aplicación, retos científico-tecnológicos y claves para la adopción. Fundació Víctor Grífols i Lucas. 2023. URL: https://www.fundaciogrifols.org/documents/4438882/5272129/Q63_inteligencia_artificial.pdf [accessed 2025-05-03]
- Windisch P, Hertler C, Blum D, Zwahlen D, Förster R. Leveraging advances in artificial intelligence to improve the quality and timing of palliative care. Cancers (Basel). 2020;12(5):1149. [FREE Full text] [CrossRef] [Medline]
- Tei S, Fujino J. Artificial intelligence, internet addiction, and palliative care. Eur. Psychiatr. 2024;67(S1):S339. [FREE Full text] [CrossRef]
- Ferrario A, Gloeckler S, Biller-Andorno N. Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. J Med Ethics. 2023;49(3):165-174. [FREE Full text] [CrossRef] [Medline]
- Preliminary report on the first draft of the recommendation on the ethics of artificial intelligence [Internet]. UNESCO. 2021. URL: https://unesdoc.unesco.org/ark:/48223/pf0000374266 [accessed 2025-05-03]
- Barcelona Declaration for the proper development and usage of artificial intelligence in Europe [Internet]. IIIA CSIC. 2017. URL: https://www.iiia.csic.es/barcelonadeclaration/ [accessed 2025-05-03]
- From principles to practice: an interdisciplinary framework to operationalise AI ethics [Internet]. AI Ethics Impact Group. 2020. URL: https://www.ai-ethics-impact.org/en [accessed 2025-05-03]
- Technical methods for regulatory inspection of algorithmic systems in social media platforms [Internet]. Ada Lovelace Institute. 2021. URL: https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/ [accessed 2025-05-03]
- OECD. OECD framework for the classification of AI systems. OECD Digit Econ Paper. Organisation for Economic Co-Operation and Development. 2023. URL: https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html [accessed 2025-05-03]
- Aristotle. Crisp R, editor. Aristotle: Nicomachean Ethics. Cambridge: Cambridge University Press; 2000.
- Kant I. Gregor M, Timmermann J, editors. Immanuel Kant: Groundwork of the Metaphysics of Morals: A German-English Edition. Cambridge: Cambridge University Press; 2011.
- Idareta Goldaracena F, Úriz Peman MJ. Aportaciones de la ética de la alteridad de E. Lévinas y la ética del cuidado de C. Gilligan a la intervención en trabajo social. Alternativas. 2012;(19):33-44. [CrossRef]
- Vu E, Steinmann N, Schröder C, Förster R, Aebersold DM, Eychmüller S, et al. Applications of machine learning in palliative care: a systematic review. Cancers (Basel). 2023;15(5):1596. [CrossRef] [Medline]
- Page MJ, Moher D, McKenzie JE. Introduction to preferred reporting items for systematic reviews and meta-analyses 2020 and implications for research synthesis methodologists. Res Synth Methods. 2022;13(2):156-163. [CrossRef] [Medline]
- CASP checklists. [Internet]. Critical Appraisal Skills Programme: CASP. URL: https://casp-uk.net/casp-tools-checklists/ [accessed 2025-05-03]
- Balch JA, Chatham AH, Hong PKW, Manganiello L, Baskaran N, Bihorac A, et al. Predicting patient reported outcome measures: a scoping review for the artificial intelligence-guided patient preference predictor. Front Artif Intell. 2024;7:1477447. [FREE Full text] [CrossRef] [Medline]
- Strand JJ, Morgan AA, Pachman DR, Storlie CB, Wilson PM. Performance of an artificial intelligence/machine learning model designed to identify hospitalized patients with cancer who could benefit from timely specialized palliative care delivery. JCO. 2024;42(16_suppl):1559-1559. [FREE Full text] [CrossRef]
- He JC, Moffat GT, Podolsky S, Khan F, Liu N, Taback N, et al. Machine learning to allocate palliative care consultations during cancer treatment. J Clin Oncol. 2024;42(14):1625-1634. [CrossRef] [Medline]
- Liu JH, Shih CY, Huang HL, Peng JK, Cheng SY, Tsai JS, et al. Evaluating the potential of machine learning and wearable devices in end-of-life care in predicting 7-day death events among patients with terminal cancer: cohort study. J Med Internet Res. 2023;25:e47366. [FREE Full text] [CrossRef] [Medline]
- Heinzen EP, Wilson PM, Storlie CB, Demuth GO, Asai SW, Schaeferle GM, et al. Impact of a machine learning algorithm on time to palliative care in a primary care population: protocol for a stepped-wedge pragmatic randomized trial. BMC Palliat Care. 2023;22(1):9. [FREE Full text] [CrossRef] [Medline]
- Morgan A, Karow J, Olson E, Strand J, Wilson P, Soleimani J, et al. Randomized trial of a novel artificial intelligence/machine learning model to predict the need for specialty palliative care. J Pain Symptom Manage. 2022;63(5):879-880. [FREE Full text] [CrossRef]
- Porter AS, Harman S, Lakin JR. Power and perils of prediction in palliative care. Lancet. 2020;395(10225):680-681. [CrossRef] [Medline]
- Salama V, Godinich B, Geng Y, Humbert-Vidan L, Maule L, Wahid KA, et al. Artificial intelligence and machine learning in cancer pain: a systematic review. J Pain Symptom Manage. 2024;68(6):e462-e490. [FREE Full text] [CrossRef] [Medline]
- Lazris D, Schenker Y, Thomas TH. AI-generated content in cancer symptom management: a comparative analysis between ChatGPT and NCCN. J Pain Symptom Manage. 2024;68(4):e303-e311. [FREE Full text] [CrossRef] [Medline]
- Ott T, Heckel M, Öhl N, Steigleder T, Albrecht NC, Ostgathe C, et al. Palliative care and new technologies. The use of smart sensor technologies and its impact on the total care principle. BMC Palliat Care. 2023;22(1):50. [FREE Full text] [CrossRef] [Medline]
- Deutsch TM, Pfob A, Brusniak K, Riedel F, Bauer A, Dijkstra T, et al. Machine learning and patient-reported outcomes for longitudinal monitoring of disease progression in metastatic breast cancer: a multicenter, retrospective analysis. Eur J Cancer. 2023;188:111-121. [CrossRef] [Medline]
- Yang TY, Kuo P, Huang Y, Lin H, Malwade S, Lu L, et al. Deep-learning approach to predict survival outcomes using wearable actigraphy device among end-stage cancer patients. Front Public Health. 2021;9:730150. [FREE Full text] [CrossRef] [Medline]
- Gondode PG, Mahor V, Rani D, Ramkumar R, Yadav P. Debunking palliative care myths: assessing the performance of artificial intelligence chatbots (ChatGPT vs. Google Gemini). IJPC. 2024;30:284-287. [FREE Full text] [CrossRef]
- Srivastava R, Srivastava S. Can artificial intelligence aid communication? Considering the possibilities of GPT-3 in Palliative care. IJPC. 2023;29:418-425. [FREE Full text] [CrossRef]
- Reason T, Rawlinson W, Langham J, Gimblett A, Malcolm B, Klijn S. Artificial intelligence to automate health economic modelling: a case study to evaluate the potential application of large language models. Pharmacoecon Open. 2024;8(2):191-203. [FREE Full text] [CrossRef] [Medline]
- Kamdar M, Lakin J, Zhang H. Artificial intelligence, machine learning, and digital therapeutics in palliative care and hospice: the future of compassionate care or rise of the robots? (TH363). J Pain Symptom Manage. 2020;59(2):434-435. [FREE Full text] [CrossRef]
- See KC. Using artificial intelligence as an ethics advisor. Ann Acad Med Singap. 2024;53(7):454-455. [FREE Full text] [CrossRef] [Medline]
- Adegbesan A, Akingbola A, Ojo O, Jessica OU, Alao UH, Shagaya U, et al. Ethical challenges in the integration of artificial intelligence in palliative care. J Med, Surg, Public Health. 2024;4:100158. [FREE Full text] [CrossRef]
- Ranard BL, Park S, Jia Y, Zhang Y, Alwan F, Celi LA, et al. Minimizing bias when using artificial intelligence in critical care medicine. J Crit Care. 2024;82:154796. [FREE Full text] [CrossRef] [Medline]
- De Panfilis L, Peruselli C, Tanzi S, Botrugno C. AI-based clinical decision-making systems in palliative medicine: ethical challenges. BMJ Support Palliat Care. 2023;13(2):183-189. [CrossRef] [Medline]
- Meier LJ, Hein A, Diepold K, Buyx A. Algorithms for ethical decision-making in the clinic: a proof of concept. Am J Bioeth. 2022;22(7):4-20. [CrossRef] [Medline]
- Macheka S, Ng PY, Ginsburg O, Hope A, Sullivan R, Aggarwal A. Prospective evaluation of artificial intelligence (AI) applications for use in cancer pathways following diagnosis: a systematic review. BMJ Oncol. 2024;3(1):e000255. [CrossRef] [Medline]
- Reddy V, Nafees A, Raman S. Recent advances in artificial intelligence applications for supportive and palliative care in cancer patients. Curr Opin Support Palliat Care. 2023;17(2):125-134. [CrossRef] [Medline]
- Barry C, Paes P, Noble S, Davies A. Challenges to delivering evidence-based palliative medicine. Clin Med (Lond). 2023;23(2):182-184. [FREE Full text] [CrossRef] [Medline]
- Chua IS, Gaziel-Yablowitz M, Korach ZT, Kehl KL, Levitan NA, Arriaga YE, et al. Artificial intelligence in oncology: path to implementation. Cancer Med. 2021;10(12):4138-4149. [CrossRef] [Medline]
- Hawker S, Payne S, Kerr C, Hardey M, Powell J. Appraising the evidence: reviewing disparate data systematically. Qual Health Res. 2002;12(9):1284-1299. [CrossRef] [Medline]
- Cavaciuti M, Nwosu AC. 92 ethical challenges of artificial intelligence technology in palliative care. BMJ Supportive Palliative Care. 2020;10:A41. [FREE Full text] [CrossRef]
- Bak M, Hartman L, Graafland C, Korfage IJ, Buyx A, Schermer M, et al. 4D PICTURE Consortium. Ethical design of data-driven decision support tools for improving cancer care: embedded ethics review of the 4D PICTURE project. JMIR Cancer. 2025;11:e65566. [FREE Full text] [CrossRef] [Medline]
- Chow JCL, Li K. Ethical considerations in human-centered AI: advancing oncology chatbots through large language models. JMIR Bioinform Biotechnol. 2024;5:e64406. [FREE Full text] [CrossRef] [Medline]
- Johnson JL, Adkins D, Chauvin S. A review of the quality indicators of rigor in qualitative research. Am J Pharm Educ. 2020;84(1):7120. [FREE Full text] [CrossRef] [Medline]
- Karatzanou N. Artificial intelligence (AI) in palliative care: ethical challenges. Bioethica. 2025;11(1):51-63. [FREE Full text] [CrossRef]
- Oh O, Demiris G, Ulrich CM. The ethical dimensions of utilizing artificial intelligence in palliative care. Nurs Ethics. 2024:09697330241296874. [FREE Full text] [CrossRef] [Medline]
- Agency for Healthcare Research and Quality. Six domains of healthcare quality [Internet]. URL: https://www.ahrq.gov/talkingquality/measures/six-domains.html [accessed 2025-05-03]
- Corti C, Cobanaj M, Marian F, Dee EC, Lloyd MR, Marcu S, et al. Artificial intelligence for prediction of treatment outcomes in breast cancer: systematic review of design, reporting standards, and bias. Cancer Treat Rev. 2022;108:102410. [CrossRef] [Medline]
- Zhuang Q, Zhang AY, Cong RSTY, Yang GM, Neo PSH, Tan DS, et al. Towards proactive palliative care in oncology: developing an explainable EHR-based machine learning model for mortality risk prediction. BMC Palliat Care. 2024;23(1):124. [FREE Full text] [CrossRef] [Medline]
- Bowers A, Drake C, Makarkin AE, Monzyk R, Maity B, Telle A. Predicting patient mortality for earlier palliative care identification in medicare advantage plans: features of a machine learning model. JMIR AI. 2023;2:e42253. [FREE Full text] [CrossRef] [Medline]
- Ortiz BL, Gupta V, Kumar R, Jalin A, Cao X, Ziegenbein C, et al. Data preprocessing techniques for AI and machine learning readiness: scoping review of wearable sensor data in cancer care. JMIR mHealth uHealth. 2024;12:e59587. [FREE Full text] [CrossRef] [Medline]
- Kalla M, O'Brien T, Metcalf O, Hoda R, Chen X, Li A, et al. Understanding experiences of telehealth in palliative care: photo interview study. JMIR Hum Factors. 2025;12:e53913. [FREE Full text] [CrossRef] [Medline]
- Bertsimas D, Dunn J, Pawlowski C, Silberholz J, Weinstein A, Zhuo YD, et al. Applied informatics decision support tool for mortality predictions in patients with cancer. JCO Clin Cancer Inform. 2018;2:1-11. [FREE Full text] [CrossRef] [Medline]
- Roldan-Vasquez E, Mitri S, Bhasin S, Bharani T, Capasso K, Haslinger M, et al. Reliability of artificial intelligence chatbot responses to frequently asked questions in breast surgical oncology. J Surg Oncol. 2024;130(2):188-203. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
CASP: Critical Appraisal Skills Programme
IOM: Institute of Medicine
ML: machine learning
OECD: Organisation for Economic Co-Operation and Development
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SDOH: social determinants of health
Edited by T de Azevedo Cardoso; submitted 06.03.25; peer-reviewed by E Ray Chaudhuri, MK Milic; comments to author 14.04.25; revised version received 22.04.25; accepted 29.04.25; published 14.05.25.
Copyright©Abel García Abejas, David Geraldes Santos, Fabio Leite Costa, Aida Cordero Botejara, Helder Mota-Filipe, Àngels Salvador Vergés. Originally published in the Interactive Journal of Medical Research (https://www.i-jmr.org/), 14.05.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/, as well as this copyright and license information must be included.