Patient safety can be increased by improving the organization of care. A tool that evaluates the actual organization of care, as perceived by multidisciplinary teams, is the Care Process Self-Evaluation Tool (CPSET). CPSET was developed in 2007 and includes 29 items in five subscales: (a) patient-focused organization, (b) coordination of the care process, (c) collaboration with primary care, (d) communication with patients and family, and (e) follow-up of the care process. The goal of the present study was to further evaluate the psychometric properties of the CPSET at the team and hospital levels and to compile a cutoff score table.
Effective interprofessional teamwork is an essential component of high-quality patient care in an increasingly complex medical environment. The objective of this study was to evaluate whether the implementation of care pathways (CPs) improves teamwork in an acute hospital setting.
The objective of this study was to assess the validity of misclassification measurements obtained from a pre-survey calibration exercise by comparing them with validation scores obtained under field conditions. Validation data were collected from the Smile for Life project, an oral health intervention study in Flemish children. A calibration exercise was organized under pre-survey conditions (32 age-matched children examined by eight examiners and the benchmark scorer). In addition, using a pre-determined sampling scheme blinded to the examiners, the benchmark scorer re-examined between six and 11 children screened by each of the dentists during the survey. Factors influencing sensitivity and specificity for scoring caries experience (CE) were investigated, including examiner, tooth type, surface type, tooth position (upper/lower jaw, right/left side) and validation setting (pre-survey versus field). To account for the clustering effect in the data, a generalized estimating equations (GEE) approach was applied. Sensitivity scores were influenced not only by the validation setting (lower sensitivity under field conditions, p < 0.01), but also by examiner, tooth type (lower sensitivity in molar teeth, p < 0.01) and tooth position (lower sensitivity in the lower jaw, p < 0.01). Factors influencing specificity were examiner, tooth type (lower specificity in molar teeth, p < 0.01) and surface type (lower specificity on the occlusal surface than on other surfaces), but not the validation setting. Misclassification measurements for scoring CE are thus influenced by several factors. In this study, the validation setting influenced sensitivity, with lower scores obtained when measuring data validity under field conditions. Results obtained in a pre-survey calibration setting should therefore be interpreted with caution and do not always reflect the actual performance of examiners during fieldwork.
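The core computation behind these results, sensitivity and specificity of examiner calls against a benchmark scorer, stratified by a factor such as tooth type, can be sketched as follows. The data below are purely illustrative (hypothetical surface-level calls, not the study's data), and this sketch omits the GEE modelling the study used to handle clustering within mouths.

```python
# Sketch: sensitivity/specificity of examiner caries calls vs. a benchmark,
# stratified by tooth type. Data are hypothetical, not from the study.

def sens_spec(records):
    """records: list of (examiner_call, benchmark_call) booleans."""
    tp = sum(1 for e, b in records if e and b)
    fn = sum(1 for e, b in records if not e and b)
    tn = sum(1 for e, b in records if not e and not b)
    fp = sum(1 for e, b in records if e and not b)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# (tooth_type, examiner_call, benchmark_call) -- hypothetical records
data = [
    ("molar", True, True), ("molar", False, True), ("molar", True, False),
    ("molar", True, True), ("incisor", True, True), ("incisor", False, False),
    ("incisor", False, False), ("incisor", True, True),
]

for tooth in ("molar", "incisor"):
    recs = [(e, b) for t, e, b in data if t == tooth]
    se, sp = sens_spec(recs)
    print(f"{tooth}: sensitivity={se:.2f} specificity={sp:.2f}")
```

In a real analysis each record would additionally carry examiner, surface and jaw-position labels, and the dependence of repeated observations within the same child would be modelled rather than ignored.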
Data obtained from calibration exercises are used to assess the level of agreement between examiners (and the benchmark examiner) and/or between repeated examinations by the same examiner in epidemiological surveys or large-scale clinical studies. Agreement can be measured using different techniques: the kappa statistic, percentage agreement, the Dice coefficient, sensitivity and specificity. Each of these methods has specific characteristics and its own shortcomings. The aim of this contribution is to critically review techniques for the measurement and analysis of examiner agreement and to illustrate them using data from a recent survey in young children, the Smile for Life project. The above-mentioned agreement measures are influenced (in differing ways and to differing extents) by the unit of analysis (subject, tooth or surface level) and the disease level in the validation sample. These effects are more pronounced for percentage agreement and kappa than for sensitivity and specificity. It is, therefore, important to report the unit of analysis and the disease level in the validation sample alongside agreement measures. Confidence intervals should also be included, since they indicate the reliability of the estimate. When dependency among observations is present [as is the case in caries experience data sets, with their typical hierarchical structure (surface-tooth-subject)], it influences the width of the confidence interval and should not be ignored; in this situation, multilevel modelling is necessary. This review clearly shows that guidelines are needed for the measurement, interpretation and reporting of examiner reliability in caries experience surveys.
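The agreement measures named above can all be computed from a single 2x2 table of examiner versus benchmark calls, and a toy example makes the disease-level effect concrete. The counts below are hypothetical, chosen so that the two tables have identical sensitivity and specificity but different prevalence:

```python
# Agreement measures from a 2x2 table (examiner vs. benchmark).
# tp/fp/fn/tn are hypothetical subject-level counts.

def agreement_measures(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    pct_agree = (tp + tn) / n
    dice = 2 * tp / (2 * tp + fp + fn)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)   # chance both say "diseased"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)    # chance both say "sound"
    p_e = p_yes + p_no
    kappa = (pct_agree - p_e) / (1 - p_e)
    return {"sensitivity": sens, "specificity": spec,
            "percentage_agreement": pct_agree, "dice": dice, "kappa": kappa}

# Same sensitivity (0.80) and specificity (0.90), different prevalence:
high_prev = agreement_measures(tp=40, fp=5, fn=10, tn=45)   # 50% diseased
low_prev = agreement_measures(tp=8, fp=9, fn=2, tn=81)      # 10% diseased
print(high_prev["kappa"], low_prev["kappa"])
```

With sensitivity and specificity held fixed, kappa falls from 0.70 at 50% prevalence to about 0.53 at 10% prevalence, which is exactly why the disease level in the validation sample must be reported together with any kappa value.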
Kappa-like agreement indexes are often used to assess the agreement among examiners on a categorical scale. Their distinguishing feature is that they correct the observed level of agreement for chance agreement. In the present paper, we first define two agreement indexes of this family in a hierarchical context, considering in particular the cases of a random and of a fixed set of examiners. We then develop a method to evaluate the influence of factors on these indexes, relating the agreement indexes directly to a set of covariates through a hierarchical model. We obtain the posterior distribution of the model parameters in a Bayesian framework. We apply the proposed approach to dental data and compare it with the generalized estimating equations approach.
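As a minimal, non-hierarchical baseline for these chance-corrected indexes, the classical random-examiners case can be sketched with Fleiss' kappa, where each subject is rated by a set of raters and agreement is corrected for the chance level implied by the overall category proportions. (Cohen's kappa is the corresponding fixed-pair case.) The ratings below are invented for illustration; the paper's actual model additionally links the indexes to covariates in a Bayesian hierarchical framework, which this sketch does not attempt.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a random set of raters.
    ratings[i][j] = number of raters assigning subject i to category j.
    Assumes the same number of raters for every subject."""
    N = len(ratings)                # subjects
    n = sum(ratings[0])             # raters per subject
    k = len(ratings[0])             # categories
    # Overall proportion of assignments to each category
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-subject observed agreement among rater pairs
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N            # mean observed agreement
    P_e = sum(p * p for p in p_j)   # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# 4 subjects, 3 raters, 2 categories (e.g. caries present / absent)
ratings = [[3, 0], [0, 3], [2, 1], [3, 0]]
print(round(fleiss_kappa(ratings), 3))  # -> 0.625
```

Perfect unanimity on every subject yields kappa = 1, while agreement at the level expected by chance yields kappa near 0, which is the "correction for the effect of chance" the abstract refers to.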