Clinical Prediction Model for 1-Year Mortality in Patients With Advanced Cancer (2022)

Key Points

Question Can a predictive model be developed and validated to calculate the 1-year risk of death among patients with advanced cancer by combining clinician responses to the surprise question (“Would I be surprised if this patient died in the next year?”) with patients’ clinical characteristics and laboratory values?

Findings In this prognostic study that included 867 patients with advanced cancer, the developed model combining the surprise question, clinical characteristics, and laboratory values had better discriminative ability in predicting 1-year risk of death than the surprise question, clinical characteristics, or laboratory values alone. A nomogram was developed to aid clinicians in identifying those at risk of dying within 1 year.

Meaning These results suggest that the prediction model and nomogram developed for this study can be used by clinicians to identify patients who may benefit from palliative care and advance care planning.

Abstract

Importance To optimize palliative care in patients with cancer who are in their last year of life, timely and accurate prognostication is needed. However, available instruments for prognostication, such as the surprise question (“Would I be surprised if this patient died in the next year?”) and various prediction models using clinical variables, are not well validated or lack discriminative ability.

Objective To develop and validate a prediction model to calculate the 1-year risk of death among patients with advanced cancer.

Design, Setting, and Participants This multicenter prospective prognostic study was performed in the general oncology inpatient and outpatient clinics of 6 hospitals in the Netherlands. A total of 867 patients were enrolled between June 2 and November 22, 2017, and followed up for 1 year. The primary analyses were performed from October 9 to 25, 2019, with the most recent analyses performed from June 19 to 22, 2022. Cox proportional hazards regression analysis was used to develop a prediction model including 3 categories of candidate predictors: clinician responses to the surprise question, patient clinical characteristics, and patient laboratory values. Data on race and ethnicity were not collected because most patients were expected to be of White race and Dutch ethnicity, and race and ethnicity were not considered as prognostic factors. The models’ discriminative ability was assessed using internal-external validation by study hospital and measured using the C statistic. Patients 18 years and older with locally advanced or metastatic cancer were eligible. Patients with hematologic cancer were excluded.

Main Outcomes and Measures The risk of death by 1 year.

Results Among 867 patients, the median age was 66 years (IQR, 56-72 years), and 411 individuals (47.4%) were male. The 1-year mortality rate was 41.6% (361 patients). Three prediction models with increasing complexity were developed: (1) a simple model including the surprise question, (2) a clinical model including the surprise question and clinical characteristics (age, cancer type prognosis, visceral metastases, brain metastases, Eastern Cooperative Oncology Group performance status, weight loss, pain, and dyspnea), and (3) an extended model including the surprise question, clinical characteristics, and laboratory values (hemoglobin, C-reactive protein, and serum albumin). The pooled C statistic was 0.69 (95% CI, 0.67-0.71) for the simple model, 0.76 (95% CI, 0.73-0.78) for the clinical model, and 0.78 (95% CI, 0.76-0.80) for the extended model. A nomogram and web-based calculator were developed to support clinicians in adequately caring for patients with advanced cancer.

Conclusions and Relevance In this study, a prediction model including the surprise question, clinical characteristics, and laboratory values had better discriminative ability in predicting death among patients with advanced cancer than models including the surprise question, clinical characteristics, or laboratory values alone. The nomogram and web-based calculator developed for this study can be used by clinicians to identify patients who may benefit from palliative care and advance care planning. Further exploration of the feasibility and external validity of the model is needed.

Introduction

Palliative care aims to optimize the quality of life among both patients who are in the last phase of life and their relatives.1,2 High-quality and patient-centered palliative care is supported by timely advance care planning.3,4 Palliative care and advance care planning can be facilitated by adequate prognostication (ie, making predictions about a patient’s remaining life expectancy).5 Prognostication may be based on clinicians’ subjective predictions, objective predictors or prediction models, or both.

The surprise question (“Would I be surprised if this patient died in the next year?”) is a well-known tool to support clinician prediction of the survival of patients with advanced illness. It is a generic non–disease-specific tool that is recommended to identify patients with palliative care needs.6 The surprise question alone has been studied as a predictor of death within 1 year among patients with cancer and found to have a sensitivity of 77% and specificity of 90%.7 Those findings suggest that the surprise question is suitable for identifying patients with cancer who will live beyond 1 year but less suitable for identifying those who are going to die within 1 year. Mudge et al8 attempted to improve the prognostic performance of the surprise question for 1-year mortality among hospital inpatients by combining the surprise question with indicators of functional deterioration.9 These indicators included general deterioration (eg, declining functional performance status, weight loss, or repeated unplanned hospital admissions) and clinical indicators for specific advanced diseases (eg, functional ability deteriorating due to progressive cancer or heart failure or extensive untreatable coronary artery disease, with breathlessness or chest pain at rest or on minimal effort). The surprise question combined with general and disease-specific indicators had higher accuracy in predicting death within 1 year than the surprise question alone (81.3% vs 62.0%).8

Cancer-specific prediction models, such as the Palliative Prognostic Score and the Palliative Prognostic Index, have been widely studied and validated for predicting whether patients are in the last months, weeks, or days of life.10 However, few studies have investigated predictors or prediction models for the last year of life. A review by Owusuaa et al11 summarized predictors of death within 3 months to 2 years; these predictors included age, sex, Eastern Cooperative Oncology Group (ECOG) performance status, brain metastases, visceral metastases, and cutaneous or subcutaneous metastases. Prediction models consisting of 1 or more predictors (eg, the Oncological-Multidimensional Prognostic Index) identified in this review did not include any form of clinician prediction of survival. Furthermore, those models had moderate discrimination abilities (C statistic or area under the curve of 0.60-0.70) or were not well (ie, externally) validated.11

It is well established that prognostication is most accurate when clinician predictions of survival are combined with clinical predictors.12 However, little is known about that combination for the prediction of death within 1 year in patients with cancer. Therefore, we aimed to develop and validate a model to calculate the 1-year risk of death for patients with advanced cancer.

Methods

Patients and Procedures

The protocol for this prognostic study was reviewed and approved by the medical ethical research committee of Erasmus MC, Erasmus University Medical Center, Rotterdam. The study protocol was also approved by the other study hospitals. All eligible patients were informed about the study in writing, and written or oral informed consent (depending on the procedure of the study hospital) was obtained from all patients. The collected data were analyzed anonymously. This study followed the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) reporting guideline for prognostic studies.13

Patients eligible for inclusion were 18 years or older, had locally advanced or metastatic cancer, and were receiving treatment with palliative intent, with or without anticancer treatment. A total of 867 patients (847 from outpatient clinics) were prospectively and consecutively enrolled from both the general oncology inpatient clinics and the outpatient clinics of 6 hospitals in the Netherlands (Erasmus MC, Ikazia Hospital Rotterdam, Maasstad Hospital Rotterdam, Amphia, Van Weel Bethesda Hospital, and Admiraal de Ruyter Hospital) from June 2 to November 22, 2017 (eBox 1 in Supplement 1). Patients with hematologic cancer were excluded. Medical specialists, residents, and nurse practitioners from the study hospitals enrolled eligible patients consecutively based on the 3 inclusion criteria and 1 exclusion criterion, which were outlined on a poster in every consultation room and clinic. The primary analyses were performed from October 9 to 25, 2019, with the most recent analyses performed from June 19 to 22, 2022.

A total of 17 candidate predictors of death were selected based on the findings of a systematic review and meta-analysis.11 These predictors were categorized as follows: (1) clinician responses to the surprise question (“Would I be surprised if this patient died in the next year?”7); (2) patient clinical characteristics, including age, sex, comorbidity, cancer type, metastases (visceral [including liver, pancreas, peritoneal, or pleural and excluding lung], brain, and cutaneous or subcutaneous), ECOG performance status, food intake, weight loss, pain, dyspnea, and fatigue; and (3) patient laboratory values, including hemoglobin, C-reactive protein (CRP), and serum albumin (eBox 2 in Supplement 1). Data on race and ethnicity were not collected because most patients were expected to be of White race and Dutch ethnicity, and race and ethnicity were not considered as prognostic factors. Before this study began, the clinical feasibility of collecting information on these predictors was evaluated in a focus group comprising oncologists and other clinicians.

Variables were gathered on the day of inclusion via a questionnaire, which was completed by the medical specialist, resident, or nurse practitioner who was treating the patient. The questionnaire included the surprise question and items about the patient’s current performance status, which was assessed according to the ECOG classification system (range, 0-4, with 0 indicating no performance restrictions and 4 indicating totally confined to bed or chair)14; the patient’s current food intake (normal, mildly reduced, or severely reduced), which was evaluated by asking the patient; the patient’s average pain during the previous week, which was assessed using an 11-point numerical rating scale (range, 0-10, with 0 indicating no pain and 10 indicating the worst pain possible)15; the patient’s level of dyspnea (range, 0-4, with 0 indicating no dyspnea and 4 indicating life-threatening dyspnea) and level of fatigue (range, 0-3, with 0 indicating no fatigue and 3 indicating fatigue that limits self-care activities of daily living), which were assessed according to the Common Terminology Criteria for Adverse Events, version 4.016; and the patient’s weight loss, which was assessed by asking the patient about total weight loss in the previous 6 months. Research assistants obtained information on other variables from patients’ medical records. Comorbidity was assessed using the Charlson Comorbidity Index.17

For the laboratory parameters, the most recent test results from the month before study inclusion were collected. Types of cancer were classified based on literature18,19 as those having (1) a good prognosis (mean survival of 40 months) for breast, prostate, or thyroid cancer or (2) an intermediate or poor prognosis (mean survival of 10-24 months) for all other cancer types. The sample size was estimated at 430 patients, which was based on the expected mortality rate (40%) and the number of expected deaths (170) in relation to the total number of predictors (17).11,20 All patients were followed up for a maximum of 1 year, and information about their vital status (ie, alive or dead) was obtained from medical records. When it was unclear whether the patient was still alive, the patient’s general practitioner was contacted by telephone. For patients who died during follow-up, the date of death was recorded.
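The sample size reasoning above can be checked with a quick events-per-variable calculation (a minimal sketch using the figures quoted in the text; the common rule of thumb is roughly 10 outcome events per candidate predictor):

```python
# Events-per-variable (EPV) check for the planned sample size,
# using the figures quoted in the text.
n_planned = 430        # planned number of patients
mortality_rate = 0.40  # expected 1-year mortality
n_predictors = 17      # candidate predictors

expected_events = n_planned * mortality_rate  # ~170 expected deaths
epv = expected_events / n_predictors          # events per candidate predictor

print(round(expected_events), round(epv, 1))  # 172 10.1
```

With about 10 events per candidate predictor, the planned sample meets the conventional minimum for developing a Cox regression model.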

Statistical Analysis

The primary outcome was the probability of death by 1 year. The prognostic performance of the surprise question in predicting death within 1 year was assessed. Possible nonlinear associations between the risk of death and continuous predictors were investigated using restricted cubic splines. If there was evidence of a nonlinear association, a suitable transformation was chosen to approximate the spline. Cox proportional hazards regression analysis was used to develop a prediction model by applying backward selection using a liberal P value (P < .20). We assumed that missing values were missing at random; multiple imputation was used to impute missing values 10 times. The results from analyses of these imputed data sets were pooled using Rubin rules.21 We performed sensitivity analyses to examine the possible impact of the violation of missing at random.
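The pooling step can be illustrated with a minimal sketch of Rubin rules, assuming a coefficient estimate and its variance are available from each of the m = 10 imputed data sets (the numbers below are hypothetical, not taken from the study):

```python
from statistics import mean, variance

# Hypothetical log hazard ratio estimates and their variances (squared
# standard errors) from m = 10 imputed data sets.
estimates = [1.68, 1.72, 1.65, 1.70, 1.74, 1.69, 1.66, 1.71, 1.73, 1.67]
variances = [0.018, 0.020, 0.019, 0.021, 0.018,
             0.020, 0.019, 0.018, 0.021, 0.020]

m = len(estimates)
pooled_estimate = mean(estimates)       # Rubin: average of the m estimates
within_var = mean(variances)            # average within-imputation variance
between_var = variance(estimates)       # between-imputation (sample) variance
total_var = within_var + (1 + 1 / m) * between_var  # Rubin total variance

print(round(pooled_estimate, 3), round(total_var ** 0.5, 3))
```

The between-imputation term inflates the standard error so that uncertainty due to the missing data is carried into the pooled confidence intervals.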

The prediction model was validated through internal-external validation to evaluate heterogeneity in model performance across the study hospitals.22 In this approach, the prediction model was refitted with data from all study hospitals except 1, and the resulting model was validated with data from the hospital that was left out of model development. This procedure was repeated until each hospital had been used once for validation. We used the Harrell C statistic to evaluate the ability of the prediction model to discriminate between patients who died and patients who survived during the follow-up period. The C statistic ranges from 0.5 to 1.0, with 0.5 indicating that a model yields prognostic results equivalent to a coin toss and 1.0 indicating that a model has perfect prognostic discrimination. In addition, we assessed the calibration of the model during internal-external validation using calibration plots.
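The Harrell C statistic can be sketched in pure Python: among all usable pairs (pairs in which the patient with the shorter observed time actually died), count how often the model assigned that patient the higher predicted risk. The follow-up times, event indicators, and risk scores below are hypothetical:

```python
def harrell_c(times, events, risk_scores):
    """Harrell concordance index for right-censored survival data.

    A pair (i, j) is usable if the patient with the strictly shorter
    follow-up time had the event; the pair is concordant if that patient
    also has the higher predicted risk. Ties in risk count as 0.5.
    """
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # i died first
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical data: follow-up in months, 1 = died, model risk score.
times = [3, 12, 12, 7, 10]
events = [1, 0, 1, 1, 0]
scores = [0.9, 0.2, 0.5, 0.3, 0.4]
print(round(harrell_c(times, events, scores), 2))  # 0.71
```

A C statistic of 0.78, as reported for the extended model, therefore means the model ranks the patient who dies first as higher risk in about 78% of usable pairs.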

We also simplified the prediction model into a nomogram, and we created a web-based calculator to calculate the probability of death within 1 year. All statistical analyses were performed using R statistical software, version 3.6.0 (R Foundation for Statistical Computing). Missing values were imputed using the mice package for R software. The web-based calculator was developed using the Shiny package for R software. The threshold for statistical significance was a 2-sided P < .05.

Results

Among 867 patients, the median age was 66 years (IQR, 56-72 years); 456 patients (52.6%) were female, and 411 patients (47.4%) were male. Most patients (476 individuals [54.9%]) were enrolled at 1 university-affiliated tertiary hospital (Erasmus MC). The most common cancer types were breast (191 patients [22.0%]), lung (173 patients [20.0%]), and gastrointestinal (132 patients [15.2%]) (Table 1; eTable 1 in Supplement 1). Overall, 595 patients (68.6%) had a cancer type with an intermediate or poor prognosis according to data from the literature.18,19 The 1-year mortality rate within the whole cohort was 41.6% (361 patients) (Figure 1). There were no missing data on vital status, although we had to contact the patient’s general practitioner to ascertain the outcomes of 77 patients (8.9%). The 1-year survival probability was 82% among patients for whom clinicians answered yes to the surprise question and 37% among patients for whom clinicians answered no.

The surprise question was answered mainly by attending medical specialists (767 patients [88.5%]), followed by nurse practitioners (55 patients [6.3%]) and residents (45 patients [5.2%]). There were no significant differences between these groups in the prognostic accuracy of predicting death within 1 year. The surprise question had a hazard ratio (HR) of 5.42 (95% CI, 5.27-7.16) when answered by specialists, 3.86 (95% CI, 3.50-10.24) when answered by nurse practitioners, and 7.32 (95% CI, 6.48-24.66) when answered by residents (P = .61). Overall, the surprise question had a sensitivity of 80% (95% CI, 75%-84%), specificity of 68%, positive predictive value (PPV) of 64% (95% CI, 60%-68%), and negative predictive value (NPV) of 82% (95% CI, 79%-86%). In the univariable regression model, all variables (except for the presence of cutaneous or subcutaneous metastases) were associated with death within 1 year, with the highest risk observed for a clinician answer of no to the surprise question (HR, 5.49; 95% CI, 4.22-7.13), an ECOG performance status of 2 or higher (HR, 4.67; 95% CI, 3.47-6.27), and a fatigue grade of 2 or higher (HR, 4.29; 95% CI, 3.12-5.91) (Table 2).
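The sensitivity, specificity, PPV, and NPV above follow directly from a 2 × 2 table of surprise-question answers (an answer of no counts as test positive) against 1-year vital status. The counts below are a hypothetical reconstruction chosen to be roughly consistent with the reported rates (361 deaths, 506 survivors); they are not figures taken from the article's tables:

```python
# Hypothetical 2 x 2 table: clinician answered "no" to the surprise
# question (test positive) vs death within 1 year.
tp = 289  # answered "no", died within 1 year
fn = 72   # answered "yes", died within 1 year
fp = 162  # answered "no", survived beyond 1 year
tn = 344  # answered "yes", survived beyond 1 year

sensitivity = tp / (tp + fn)  # proportion of deaths flagged by "no"
specificity = tn / (tn + fp)  # proportion of survivors given "yes"
ppv = tp / (tp + fp)          # P(death | answered "no")
npv = tn / (tn + fn)          # P(survival | answered "yes")

print(f"sens {sensitivity:.0%}, spec {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")
```

The trade-off is visible here: the high NPV supports ruling out 1-year death after a yes answer, while the moderate PPV limits the surprise question as a stand-alone trigger for palliative care.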

In the multivariable analyses, we developed a prediction model for death within 1 year by increasing complexity, starting with the surprise question, which performed best in the univariable model. Three versions of the prediction model were developed: (1) a simple model including the surprise question only, (2) a clinical model including the surprise question and clinical characteristics (age, cancer type prognosis, visceral metastases, brain metastases, ECOG performance status, weight loss, pain, and dyspnea), and (3) an extended model including the surprise question, clinical characteristics, and laboratory values (hemoglobin, CRP, and serum albumin) (Table 3). The pooled C statistic was 0.69 (95% CI, 0.67-0.71) for the simple model, 0.76 (95% CI, 0.73-0.78) for the clinical model, and 0.78 (95% CI, 0.76-0.80) for the extended model (eTable 2 and eTable 3 in Supplement 1). At a predefined risk-of-death threshold of 40%, the clinical model had a sensitivity of 80%, specificity of 69%, PPV of 65%, and NPV of 83%. At this threshold, the extended model had a sensitivity of 76%, specificity of 72%, PPV of 66%, and NPV of 81%. The clinical and extended models had good calibration (eFigures 1 and 2 in Supplement 1).

Additional analyses yielded a C statistic of 0.70 (95% CI, 0.68-0.73) for clinical characteristics alone, 0.71 (95% CI, 0.68-0.74) for laboratory values alone, and 0.77 (95% CI, 0.74-0.79) for the surprise question combined with laboratory values (eTable 3 in Supplement 1). Additional sensitivity analyses for CRP with a high percentage of missing values (42.1%) revealed no differences between imputed and complete-case analyses (eFigure 3 in Supplement 1).

A nomogram, which calculated the 1-year risk of death based on individual variables, was developed for the simple model (eFigure 4 in Supplement 1), clinical model (eFigure 5 in Supplement 1), and extended model (Figure 2). A web-based calculator based on the models was also created.23 A sample calculation of 1-year risk of death based on 1 patient is shown in eBox 3 in Supplement 1.

Discussion

This multicenter prospective prognostic study aimed to develop and perform internal-external validation on a prediction model for death by 1 year in patients with advanced cancer. We found that the extended model (including the surprise question, clinical characteristics, and laboratory values) had better discrimination ability than the simple model (including the surprise question only) or the clinical model (including the surprise question and clinical characteristics). However, the discriminative abilities of the clinical and extended models were relatively similar (C statistics of 0.76 and 0.78, respectively). The extended model developed in our study also had better discrimination than most other models in the literature.11,24 In addition, the clinical model had better discriminative ability than the simple model. Based on these results, our study confirmed previous findings that clinical and laboratory factors add to clinician prediction of survival using the surprise question.12

The development of an easy-to-use nomogram and web-based calculator for the clinical and extended models was novel and allowed for the calculation of 1-year risk of death in individual patients in clinical practice. Clinicians can choose to use the simple, clinical, or extended nomogram based on available patient information. However, the use of the clinical or extended model requires more variables than the 1-sentence surprise question. Because patients’ clinical characteristics may be easier to obtain than laboratory values, which require additional blood tests, clinician use of the extended model may be limited. Although the extended model best predicted the 1-year risk of death, the clinical model may be a good alternative due to the models’ similarities in discrimination (C statistics of 0.76 for the clinical model and 0.78 for the extended model).
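Under the hood, such a nomogram or calculator implements the standard Cox model risk formula: the patient's predictor values are weighted into a linear predictor (lp), and the 1-year risk is 1 − S0(1 year)^exp(lp − mean lp). The coefficients, baseline survival, and predictor names in the sketch below are hypothetical placeholders, not the published model:

```python
import math

# Hypothetical Cox model: log hazard ratio per binary predictor.
coefs = {
    "surprise_question_no": 0.9,   # clinician answered "no"
    "ecog_ge_2": 0.7,              # ECOG performance status >= 2
    "poor_prognosis_type": 0.5,    # intermediate/poor-prognosis cancer type
    "albumin_low": 0.4,            # serum albumin below normal
}
baseline_survival_1y = 0.85        # hypothetical S0(1 year) at the mean lp
mean_lp = 1.2                      # hypothetical mean linear predictor

def one_year_risk(patient):
    """1-year death risk: 1 - S0(1)^exp(lp - mean_lp)."""
    lp = sum(coefs[k] for k, present in patient.items() if present)
    return 1 - baseline_survival_1y ** math.exp(lp - mean_lp)

patient = {
    "surprise_question_no": True,
    "ecog_ge_2": True,
    "poor_prognosis_type": True,
    "albumin_low": False,
}
print(f"{one_year_risk(patient):.0%}")
```

A nomogram performs this same computation graphically: each predictor contributes points, and the point total maps to a predicted 1-year risk of death.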

The nomogram could be made visible in the patient’s electronic medical records and serve as a reminder for clinicians (both physicians and nurses) to be aware of patients who are at risk of dying within 1 year or could be implemented as part of a digital advance care planning program. The nomogram can support clinicians in initiating conversations with patients who may be in the last period of their lives and can thereby support advance care planning. The nomogram could also be an aid in making decisions about anticancer treatments in the last year of life. The interpretation of the calculated risk of death will need further research. We have yet to establish the threshold for risk of death that clinicians (should) feel comfortable using to communicate to patients that they may be in the last period of their lives, which could help to better tailor treatment decisions to quality of life.

Previous research25 has found that a machine learning algorithm, which used 559 features as inputs and was integrated into the electronic medical record, could accurately predict death within 180 days in patients with cancer. The study reported an area under the curve of 0.89,25 which outperformed our extended model. Of note, clinicians are already aware of the variables included in our model, whereas this awareness may not be the case with the variables included in a machine learning algorithm. Thus, it will be important to further assess support among health care professionals for the various types of prognostic models (a simple model, our model including multiple components, or a machine learning algorithm) in routine clinical practice.

In our study, the surprise question had higher sensitivity (80%) and higher PPV (64%) than previously reported (77% and 41%, respectively).7,26 The surprise question in our study was answered by clinicians in a hospital setting and applied to patients with advanced cancer, whereas other studies have often involved clinicians in the primary care setting and patients with all cancer stages. The surprise question may be easier to answer for patients with advanced cancer who typically have a worse prognosis than patients with other cancer stages. In addition, in contrast to previous findings,27 there were no significant differences between medical specialists and nurses with regard to the prognostic accuracy of the surprise question in predicting death within 1 year. The nurse practitioners in our study had more responsibility to assess and make decisions about the care of patients than nurses in a previous study,27 who seemed to be mainly involved in administering chemotherapy. Therefore, nurse practitioners may have expertise in answering the surprise question that is similar to that of medical specialists.

Limitations

This study has several limitations. First, clinicians in our study enrolled eligible patients by completing a questionnaire. Although the 3 inclusion criteria and 1 exclusion criterion were clear, some bias in clinicians’ selection of patients cannot be ruled out. Second, the responses to the surprise question and information about other patient variables were collected within 1 questionnaire, which might have influenced clinicians’ responses to the surprise question. Third, the percentage of missing values for CRP is relatively high (42.1%), but additional sensitivity analyses revealed no differences between imputed and complete-case analyses. Fourth, due to the relatively high mortality rate in the selected patients, the nomogram might overestimate the risk of death in patients with advanced cancer types that have better overall survival (eg, breast cancer). However, cancer type prognosis was included as a variable in the model to neutralize this possible risk. Fifth, 54.9% of patients were enrolled at 1 hospital, which is the only participating university hospital (ie, tertiary hospital). In addition, although the internal-external validation of our model supports its external validity, it will be important to test the generalizability of our model by performing independent external validation using another data set. Sixth, our model may require regular updates due to developments in treatment options (eg, targeted therapy) or survival shifts in cancer care.

Conclusions

This prognostic study found that a prediction model and nomogram including the surprise question, clinical characteristics (age, cancer type prognosis, visceral metastases, brain metastases, ECOG performance status, weight loss, pain, and dyspnea), and laboratory values (hemoglobin, CRP, and serum albumin) can support clinicians in more accurately identifying patients who are at risk of dying within 1 year. Further research on the nomogram should focus on external validation, feasibility, and its use for the initiation of advance care planning discussions with patients and relatives, which may aid in decision-making about desired care and medical treatment in the last period of patients’ lives.


Article Information

Accepted for Publication: October 11, 2022.

Published: November 30, 2022. doi:10.1001/jamanetworkopen.2022.44350

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2022 Owusuaa C et al. JAMA Network Open.

Corresponding Author: Catherine Owusuaa, Department of Medical Oncology, Erasmus MC Cancer Institute, Dr. Molewaterplein 40, Rotterdam, 3015 GD, the Netherlands (c.owusuaa@erasmusmc.nl).

Author Contributions: Dr van der Rijt had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Owusuaa, Drooger, Dietvorst, van der Heide, van der Rijt.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Owusuaa, Dietvorst, van der Heide, van der Rijt.

Critical revision of the manuscript for important intellectual content: van der Padt-Pruijsten, Drooger, Heijns, Dietvorst, Janssens-van Vliet, Nieboer, Aerts, van der Heide, van der Rijt.

Statistical analysis: Owusuaa, Nieboer.

Obtained funding: van der Rijt.

Administrative, technical, or material support: Drooger, Heijns.

Supervision: van der Padt-Pruijsten, Drooger, Dietvorst, Janssens-van Vliet, van der Heide, van der Rijt.

Conflict of Interest Disclosures: Dr Aerts reported receiving grants from AstraZeneca and Bristol-Myers Squibb; personal fees from Amphera, AstraZeneca, Bristol-Myers Squibb, Eli Lilly and Company, Merck Sharp & Dohme, and Takeda Pharmaceutical Company; and being a shareholder of Amphera outside the submitted work. Dr van der Rijt reported receiving grants from the Netherlands Organization for Health Research and Development during the conduct of the study and personal fees from Kyowa Kirin Co (via her institution) outside the submitted work. No other disclosures were reported.

Funding/Support: This study was funded by grant 844001209 from the Netherlands Organization for Health Research and Development (Dr van der Rijt).

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement 2.

Additional Contributions: We thank all study participants, medical specialists, residents, and nurse practitioners for contributing to the study. Thank you to the following contributors from the study hospitals for their administrative support in setting up the study and in patient enrollment: Liesbeth Struik, MSc, of Ikazia Hospital; Bregje van Kolck, RN, of Erasmus MC; Coen van Leijen, RN, of Maasstad Hospital; Nicole van Sluijs-Tonglet, RN, of Admiraal de Ruyter Hospital; Jacqueline Kaljouw, RN, of Van Weel Bethesda Hospital; and Ingrid van den Bos, RN, and Ria Habraken, RN, of Amphia. We thank Nelly van der Meer-van der Velden, MSc, at the Clinical Trial Center of Erasmus MC, for developing the database for data management. Thank you to Corry Leunis-de Ruiter, RN, and Hanneke van Embden-van Donk, RN, of Ikazia Hospital for help with the data collection at Ikazia Hospital. Special thanks to Irene van Beelen, MSc, of Erasmus University Rotterdam for assistance in data collection in the study hospitals during patient enrollment and at 1-year follow-up. These contributors received no financial compensation for their assistance outside of their normal salaries.

References

1. Ahmedzai SH, Costa A, Blengini C, et al; International Working Group Convened by the European School of Oncology. A new international framework for palliative care. Eur J Cancer. 2004;40(15):2192-2200. doi:10.1016/j.ejca.2004.06.009

2. Jordan K, Aapro M, Kaasa S, et al. European Society for Medical Oncology (ESMO) position paper on supportive and palliative care. Ann Oncol. 2018;29(1):36-43. doi:10.1093/annonc/mdx757

3. Rietjens JAC, Sudore RL, Connolly M, et al; European Association for Palliative Care. Definition and recommendations for advance care planning: an international consensus supported by the European Association for Palliative Care. Lancet Oncol. 2017;18(9):e543-e551. doi:10.1016/S1470-2045(17)30582-X

4. Rome RB, Luminais HH, Bourgeois DA, Blais CM. The role of palliative care at the end of life. Ochsner J. 2011;11(4):348-352.

5. Glare PA, Sinclair CT. Palliative medicine review: prognostication. J Palliat Med. 2008;11(1):84-103. doi:10.1089/jpm.2008.9992

6. Downar J, Goldman R, Pinto R, Englesakis M, Adhikari NKJ. The “surprise question” for predicting death in seriously ill patients: a systematic review and meta-analysis. CMAJ. 2017;189(13):E484-E493. doi:10.1503/cmaj.160775

7. Moss AH, Lunney JR, Culp S, et al. Prognostic significance of the “surprise” question in cancer patients. J Palliat Med. 2010;13(7):837-840. doi:10.1089/jpm.2010.0018

8. Mudge AM, Douglas C, Sansome X, et al. Risk of 12-month mortality among hospital inpatients using the surprise question and SPICT criteria: a prospective study. BMJ Support Palliat Care. 2018;8(2):213-220. doi:10.1136/bmjspcare-2017-001441

9. Supportive and Palliative Care Indicators Tool (SPICT). University of Edinburgh; 2019. Updated 2022. Accessed July 17, 2022. https://www.spict.org.uk/the-spict/

10. Hui D. Prognostication of survival in patients with advanced cancer: predicting the unpredictable? Cancer Control. 2015;22(4):489-497. doi:10.1177/107327481502200415

11. Owusuaa C, Dijkland SA, Nieboer D, van der Heide A, van der Rijt CCD. Predictors of mortality in patients with advanced cancer—a systematic review and meta-analysis. Cancers (Basel). 2022;14(2):328. doi:10.3390/cancers14020328

12. Hui D, Park M, Liu D, et al. Clinician prediction of survival versus the Palliative Prognostic Score: which approach is more accurate? Eur J Cancer. 2016;64:89-95. doi:10.1016/j.ejca.2016.05.009

13. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement. BMJ. 2015;350:g7594. doi:10.1136/bmj.g7594

14. Oken MM, Creech RH, Tormey DC, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982;5(6):649-655. doi:10.1097/00000421-198212000-00014

15. Williamson A, Hoggart B. Pain: a review of three commonly used pain rating scales. J Clin Nurs. 2005;14(7):798-804. doi:10.1111/j.1365-2702.2005.01121.x

16.

Neo HY, Xu HY, Wu HY, Hum A. Prediction of poor short-term prognosis and unmet needs in advanced chronic obstructive pulmonary disease: use of the two-minute walking distance extracted from a six-minute walk test. J Palliat Med. 2017;20(8):821-828. doi:10.1089/jpm.2016.0449 PubMedGoogle ScholarCrossref

17.

Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. doi:10.1016/0021-9681(87)90171-8 PubMedGoogle ScholarCrossref

18.

Katagiri H, Takahashi M, Wakai K, Sugiura H, Kataoka T, Nakanishi K. Prognostic factors and a scoring system for patients with skeletal metastasis. J Bone Joint Surg Br. 2005;87(5):698-703. doi:10.1302/0301-620X.87B5.15185 PubMedGoogle ScholarCrossref

19.

Tomita K, Kawahara N, Kobayashi T, Yoshida A, Murakami H, Akamaru T. Surgical strategy for spinal metastases. Spine (Phila Pa 1976). 2001;26(3):298-306. doi:10.1097/00007632-200102010-00016 PubMedGoogle ScholarCrossref

20.

Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49(12):1373-1379. doi:10.1016/S0895-4356(96)00236-3 PubMedGoogle ScholarCrossref

21.

van Buuren S. Flexible Imputation of Missing Data. 1st ed. Chapman and Hall/CRC; 2012.

22.

Steyerberg EW, Harrell FE Jr. Prediction models need appropriate internal, internal-external, and external validation. J Clin Epidemiol. 2016;69:245-247. doi:10.1016/j.jclinepi.2015.04.005 PubMedGoogle ScholarCrossref

23.

Nieboer D. Prediction model for patients with advance disease in oncology. Shinyapps. October 25, 2019. Accessed October 27, 2022. https://dnieboer.shinyapps.io/nomogram

24.

Proctor MJ, Morrison DS, Talwar D, et al. A comparison of inflammation-based prognostic scores in patients with cancer. a Glasgow Inflammation Outcome study. Eur J Cancer. 2011;47(17):2633-2641. doi:10.1016/j.ejca.2011.03.028 PubMedGoogle ScholarCrossref

25.

Manz CR, Chen J, Liu M, et al. Validation of a machine learning algorithm to predict 180-day mortality for outpatients with cancer. JAMA Oncol. 2020;6(11):1723-1730. doi:10.1001/jamaoncol.2020.4331 PubMedGoogle ScholarCrossref

26.

White N, Kupeli N, Vickerstaff V, Stone P. How accurate is the ‘surprise question’ at identifying patients at the end of life? a systematic review and meta-analysis. BMC Med. 2017;15(1):139. doi:10.1186/s12916-017-0907-4 PubMedGoogle ScholarCrossref

27.

Lefkowits C, Chandler C, Sukumvanich P, et al. Validation of the ‘surprise question’ in gynecologic oncology: comparing physicians, advanced practice providers, and nurses. Gynecol Oncol. 2016;141 (suppl 1):128. doi:10.1016/j.ygyno.2016.04.339 Google ScholarCrossref

FAQs

What is the PROgnostic model for advanced cancer?

The PROgnostic Model for Advanced Cancer (PRO-MAC) takes patient- and disease-related factors into account to identify patients at high risk of 90-day mortality.

What is a mortality prediction model?

In contrast to causal estimates, mortality prediction models map inputs (or "features") to a chosen outcome, such as mortality. They might help estimate whether a patient is at a higher risk of death, but they offer little help in making the best decision in that scenario.
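This mapping from features to a risk estimate can be sketched with a toy logistic model. The feature names and coefficients below are invented purely for illustration; they are not taken from the study's model or any published one:

```python
import math

# Hypothetical coefficients, for illustration only -- NOT from any published model.
INTERCEPT = -2.0
WEIGHTS = {"age_over_70": 0.8, "ecog_ge_2": 1.1, "albumin_low": 0.9}

def mortality_risk(features):
    """Map binary input features to a predicted probability of death
    via a logistic model: risk = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# An older patient with poor performance status gets a higher predicted risk
# than one with no risk features present.
high = mortality_risk({"age_over_70": 1, "ecog_ge_2": 1, "albumin_low": 0})
low = mortality_risk({"age_over_70": 0, "ecog_ge_2": 0, "albumin_low": 0})
```

The point is only the shape of the computation: each feature shifts the linear predictor, and the logistic function converts it to a probability between 0 and 1.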

What are prediction models in clinical practice?

Clinical prediction models (CPMs) are statistical models or algorithms that use a set of predictor variables to calculate an individual's chance of developing or having a certain condition, and thus aid clinicians with the associated clinical reasoning and decision-making [1].

What is the single best predictive factor in determining prognosis in patients with metastatic cancer referred to palliative care?

The single most important predictive factor in cancer is performance status ("functional ability," "functional status"): a measure of how much patients can do for themselves and of their activity and energy level.

What is a predictive model for prognosis?

For prognostic prediction models, the focus is on predicting a future health outcome that occurs after the moment of prediction, also using predictors available at the moment of prediction. The prediction horizon – how far ahead in time the model aims to predict outcome occurrence by – needs to be established.
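The effect of the prediction horizon can be illustrated under a deliberately simple constant-hazard (exponential survival) assumption; real prognostic models are more flexible, but the arithmetic shows why the same patient has different risks at different horizons:

```python
import math

def risk_by_horizon(hazard_per_year, horizon_years):
    """Under a constant-hazard (exponential survival) assumption,
    the probability of the event within the horizon is
    1 - S(t) = 1 - exp(-hazard * t)."""
    return 1.0 - math.exp(-hazard_per_year * horizon_years)

# The same hazard implies a larger risk at a longer horizon.
six_month = risk_by_horizon(0.5, 0.5)  # about 0.22
one_year = risk_by_horizon(0.5, 1.0)   # about 0.39
```

This is why a model's horizon must be fixed in advance: a "risk of death" is not a single number but a function of how far ahead the model looks.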

What is the most important prognostic factor in cancer?

The most important prognostic factor in all human cancers is the stage at presentation, which is the anatomic extent of the disease.

How do you calculate predicted mortality?

Age- and sex-specific (and sometimes race-specific) rates for the comparison population are multiplied by the local population counts or estimates, cell by cell, and summed to yield expected deaths. Actual (observed) deaths are then divided by the expected deaths to give the ratio.
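This indirect-standardization calculation, which yields a standardized mortality ratio (SMR), takes only a few lines. The rates, counts, and observed-death total below are made up for illustration:

```python
# Reference-population death rates and local population counts per age stratum
# (illustrative numbers only).
reference_rates = {"50-59": 0.005, "60-69": 0.012, "70-79": 0.030}
local_population = {"50-59": 2000, "60-69": 1500, "70-79": 1000}
observed_deaths = 70

# Expected deaths: rate x count in each stratum, summed across strata.
expected_deaths = sum(reference_rates[s] * local_population[s]
                      for s in reference_rates)

# Standardized mortality ratio: observed / expected.
# SMR > 1 means more deaths occurred locally than the reference rates predict.
smr = observed_deaths / expected_deaths
```

With these inputs, expected deaths come to 58 (10 + 18 + 30), giving an SMR of about 1.21.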

What does "prediction model" mean?

Predictive models analyze past performance to assess how likely a customer is to exhibit a specific behavior in the future. This category also encompasses models that seek out subtle data patterns to answer questions about customer performance, such as fraud detection models.

What is the most important predictor of overall mortality?

The 10 factors associated with the greatest risk of mortality over the study period were current or previous history as a smoker (HR = 1.91, 95% CI = 1.70, 2.14 and HR = 1.32, 95% CI = 1.22, 1.43, respectively), history of divorce (HR = 1.44, 95% CI = 1.31, 1.60), history of alcohol abuse (HR = 1.36, 95% CI = 1.14, ...

What are three examples of predictive models?

  • Regression. Regression models are used to predict a continuous numerical value based on one or more input variables. ...
  • Neural Network. Neural network models are a type of predictive modeling technique inspired by the structure and function of the human brain. ...
  • Classification. ...
  • Clustering. ...
  • Time series. ...
  • Decision Tree. ...
  • Ensemble.

Which is the best prediction model?

The most widely used predictive models are:
  • Decision trees: Decision trees are a simple, but powerful form of multiple variable analysis. ...
  • Regression (linear and logistic) Regression is one of the most popular methods in statistics. ...
  • Neural networks.

How do you establish clinical prediction models?

Five key steps are involved: obtaining a suitable dataset, making outcome predictions, evaluating predictive performance, assessing clinical usefulness, and clearly reporting findings.
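The "evaluating predictive performance" step is often reported as the C statistic (concordance), the measure used in the study above. For a binary outcome it reduces to a pairwise comparison, sketched here in plain Python:

```python
from itertools import combinations

def c_statistic(predictions, outcomes):
    """Discrimination for a binary outcome: among all pairs with different
    outcomes, the fraction in which the case (outcome = 1) received the
    higher predicted risk; ties count as 0.5."""
    concordant = ties = comparable = 0
    for (p_i, y_i), (p_j, y_j) in combinations(zip(predictions, outcomes), 2):
        if y_i == y_j:
            continue  # only pairs with one event and one non-event are comparable
        comparable += 1
        p_case, p_control = (p_i, p_j) if y_i == 1 else (p_j, p_i)
        if p_case > p_control:
            concordant += 1
        elif p_case == p_control:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfectly ranked predictions give a C statistic of 1.0;
# 0.5 corresponds to no discrimination (coin-flip ranking).
auc = c_statistic([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0])
```

A C statistic of 0.5 means the model ranks cases no better than chance, and 1.0 means every patient who died was assigned a higher risk than every patient who survived.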

Which cancers have the best prognosis?

Although no cancer is universally curable, melanoma, Hodgkin lymphoma, and breast, prostate, testicular, cervical, and thyroid cancers have some of the highest 5-year relative survival rates. Cancer is a disease that causes cells to grow and multiply uncontrollably in certain parts of the body.

How accurate is a 6-month prognosis?

Reported positive and negative predictive values range from 23.1% to 64.1% and from 85.3% to 94.5%, respectively. Over 50% of patients with an estimated six-month mortality risk of 30% or more died within 12 months.

What is the difference between advanced and metastatic cancer?

Locally advanced cancers have grown beyond their original site but have not spread to distant parts of the body; some, such as certain prostate cancers, may still be cured. Metastatic cancers have spread from where they started to other parts of the body. Cancers that have spread are often considered advanced when they can no longer be cured or controlled with treatment.

What is a prognostic model?

Prognostic models are mathematical models that relate a person's characteristics now to the risk of a particular future outcome. Prognostic models can take into account one or many current characteristics (multivariable).

What is the difference between a predictive and a prognostic model?

Clinical prediction models usually fall within one of two major categories: diagnostic prediction models, which estimate an individual's probability that a specific health condition (often a disease) is currently present, and prognostic prediction models, which estimate the probability of developing a specific health outcome in the future.

What is the four-stage cancer progression model?

Multistage chemical carcinogenesis can be conceptually divided into four stages: tumor initiation, tumor promotion, malignant conversion, and tumor progression.

What is the ENCALS prognostic model?

The ENCALS survival prediction model offers patients with amyotrophic lateral sclerosis (ALS) the opportunity to receive a personalized prognosis of survival at the time of diagnosis.
