Article

Updating methods improved the performance of a clinical prediction model in new patients.

Journal of Clinical Epidemiology, 61(1): 76-86 (January 2008).
Received 22 August 2006; revised 8 March 2007; accepted 20 April 2007; published ahead of print 26 November 2007.
Keywords: Predictive models; Risk assessment.
DOI: 10.1016/j.jclinepi.2007.04.018

Abstract

OBJECTIVE: Ideally, clinical prediction models are generalizable to other patient groups. Unfortunately, they regularly perform worse when validated in new patients and are then often redeveloped. While the original prediction model has usually been developed on a large data set, redevelopment often occurs on the smaller validation set. Recently, methods to update existing prediction models with the data of new patients have been proposed. We used an existing model that preoperatively predicts the risk of severe postoperative pain (SPP) to compare five updating methods. STUDY DESIGN AND SETTING: The model was tested and updated with a set of 752 new patients (274 [36%] with SPP). We studied the discrimination (ability to distinguish between patients with and without SPP) and calibration (agreement between the predicted risks and observed frequencies of SPP) of the five updated models in 283 other patients (100 [35%] with SPP). RESULTS: Simple recalibration methods improved calibration to a similar extent as revision methods that made more extensive adjustments to the original model. Discrimination could not be improved by any of the methods. CONCLUSION: When performance is poor in new patients, updating methods can be applied to adjust the existing model, rather than developing a new model.
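The abstract contrasts simple recalibration with more extensive revision of an existing prediction model, but does not spell out the procedures. As a rough illustration of what recalibrating an existing logistic prediction model on new patients can look like, here is a minimal Python sketch on simulated data; the predictors, sample sizes, and the specific updating strategy shown are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: re-estimate an intercept and slope for the original
# model's linear predictor in new patients ("recalibration"), leaving the
# individual predictor coefficients untouched. Data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Original" model developed on a large (hypothetical) data set
X_dev = rng.normal(size=(2000, 3))
y_dev = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X_dev[:, 0] - 0.8 * X_dev[:, 1]))))
original = LogisticRegression().fit(X_dev, y_dev)

# New patients on which the model is updated
X_new = rng.normal(size=(752, 3))
y_new = rng.binomial(1, 1 / (1 + np.exp(-(0.2 * X_new[:, 0] - 1.2 * X_new[:, 1] + 0.3))))

# Linear predictor (log-odds) of the original model in the new patients
lp_new = original.decision_function(X_new).reshape(-1, 1)

# Simple recalibration: fit outcome on the original linear predictor only
recalibration = LogisticRegression().fit(lp_new, y_new)

def updated_risk(x_row):
    """Apply the original model, then the recalibration intercept/slope."""
    lp = original.decision_function(x_row.reshape(1, -1)).reshape(-1, 1)
    return recalibration.predict_proba(lp)[0, 1]

print(updated_risk(X_new[0]))
```

A more extensive revision would instead re-estimate, or additionally adjust, the individual predictor coefficients on the new patients, which requires more data than adjusting the intercept and slope alone.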
