JoVE Visualize
Stop Reading. Start Watching.
Find video protocols related to scientific articles indexed in PubMed.
The Care Process Self-Evaluation Tool: a valid and reliable instrument for measuring care process organization of health care teams.
BMC Health Serv Res
PUBLISHED: 08-15-2013
Patient safety can be increased by improving the organization of care. A tool that evaluates the actual organization of care, as perceived by multidisciplinary teams, is the Care Process Self-Evaluation Tool (CPSET). CPSET was developed in 2007 and includes 29 items in five subscales: (a) patient-focused organization, (b) coordination of the care process, (c) collaboration with primary care, (d) communication with patients and family, and (e) follow-up of the care process. The goal of the present study was to further evaluate the psychometric properties of the CPSET at the team and hospital levels and to compile a cutoff score table.
A multilevel model for spatially correlated binary data in the presence of misclassification: an application in oral health research.
Stat Med
PUBLISHED: 06-07-2013
Dental caries is a highly prevalent disease in which acid-forming bacteria attack the tooth's hard tissues. The past and present caries status of a tooth is characterized by a response called caries experience (CE). Several epidemiological studies have explored risk factors for CE. However, the detection of CE is prone to misclassification because some cases are neither clearly carious nor noncarious, and this needs to be incorporated into epidemiological models for CE data. From a dentist's point of view, it is most appealing to analyze CE at the level of the tooth surface, implying that the multilevel structure of the data (surface-tooth-mouth) needs to be taken into account. In addition, CE data are spatially referenced: an active lesion on one surface may affect the decay process of neighboring surfaces, and it may also influence the process of scoring CE. In this paper, we investigate two hypotheses: (i) CE outcomes recorded at surface level are spatially associated; and (ii) the dental examiners exhibit spatial behavior while scoring CE at surface level. Both are tested using a spatially referenced multilevel autologistic model corrected for misclassification. The hypotheses were examined on the well-known Signal Tandmobiel® study on dental caries, and simulation studies were conducted to assess the effect of misclassification and of the strength of spatial dependence on the autologistic model parameters. Our results indicate substantial spatial dependency in the examiners' scoring behavior and in the prevalence of CE at surface level. Copyright © 2013 John Wiley & Sons, Ltd.
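The core idea of the autologistic specification above can be conveyed with a toy sketch. This is not the authors' model (theirs is multilevel and corrected for misclassification); it only illustrates the conditional form in which each surface's caries probability depends on the statuses of its neighbors. All names, the neighbor graph, and the parameter values are illustrative assumptions:

```python
import math
import random

def autologistic_cond_prob(beta0, rho, neighbor_sum):
    """Conditional probability that a site is 'carious' given its neighbors
    (logistic link): logit p = beta0 + rho * (number of affected neighbors)."""
    eta = beta0 + rho * neighbor_sum
    return 1.0 / (1.0 + math.exp(-eta))

def gibbs_autologistic(n_sites, neighbors, beta0, rho, n_sweeps=200, seed=42):
    """Gibbs sampling from an autologistic field on an arbitrary neighbor graph."""
    rng = random.Random(seed)
    y = [rng.randint(0, 1) for _ in range(n_sites)]
    for _ in range(n_sweeps):
        for i in range(n_sites):
            s = sum(y[j] for j in neighbors[i])
            y[i] = 1 if rng.random() < autologistic_cond_prob(beta0, rho, s) else 0
    return y

# Toy example: 4 surfaces of one tooth arranged in a cycle.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
field = gibbs_autologistic(4, neighbors, beta0=-1.0, rho=0.8)
```

A positive rho encodes the abstract's first hypothesis: an affected neighboring surface raises the probability that a surface is itself scored as affected.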
Better interprofessional teamwork, higher level of organized care, and lower risk of burnout in acute health care teams using care pathways: a cluster randomized controlled trial.
Med Care
PUBLISHED: 02-21-2013
Effective interprofessional teamwork is an essential component of high-quality patient care in an increasingly complex medical environment. The objective of this study was to evaluate whether the implementation of care pathways (CPs) improves teamwork in an acute hospital setting.
Examiner performance in calibration exercises compared with field conditions when scoring caries experience.
Clin Oral Investig
PUBLISHED: 02-04-2011
The objective of this study was to assess the validity of misclassification measurements obtained from a pre-survey calibration exercise by comparing them to validation scores obtained under field conditions. Validation data were collected from the Smile for Life project, an oral health intervention study in Flemish children. A calibration exercise was organized under pre-survey conditions (32 age-matched children examined by eight examiners and the benchmark scorer). In addition, using a pre-determined sampling scheme blinded to the examiners, the benchmark scorer re-examined between 6 and 11 children screened by each of the dentists during the survey. Factors influencing sensitivity and specificity for scoring caries experience (CE) were investigated, including examiner, tooth type, surface type, tooth position (upper/lower jaw, right/left side) and validation setting (pre-survey versus field). To account for the clustering effect in the data, a generalized estimating equations approach was applied. Sensitivity scores were influenced not only by the calibration setting (lower sensitivity in field conditions, p?
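The abstract's generalized estimating equations analysis exists precisely because surfaces scored within the same child are correlated. A much simpler stdlib-only way to respect that clustering (not the authors' method, just an illustration of the concern) is a cluster bootstrap: resample whole children, not individual surfaces, when building a confidence interval for sensitivity. All data below are invented toy values:

```python
import random

def sensitivity(pairs):
    """pairs: list of (benchmark, examiner) binary scores.
    Sensitivity = P(examiner scores 1 | benchmark scored 1)."""
    tp = sum(1 for b, e in pairs if b == 1 and e == 1)
    pos = sum(1 for b, _ in pairs if b == 1)
    return tp / pos if pos else float("nan")

def cluster_bootstrap_ci(clusters, stat, n_boot=1000, alpha=0.05, seed=1):
    """Resample whole clusters (children) to respect within-mouth correlation."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(clusters) for _ in clusters]
        pooled = [p for cl in sample for p in cl]
        stats.append(stat(pooled))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy validation data: each child contributes several (benchmark, examiner) scores.
child_a = [(1, 1), (1, 0), (0, 0)]
child_b = [(1, 1), (0, 0), (0, 1)]
child_c = [(1, 1), (1, 1), (0, 0)]
clusters = [child_a, child_b, child_c]
lo, hi = cluster_bootstrap_ci(clusters, sensitivity)
```

Resampling surfaces individually would understate the interval width, which is the same pitfall GEE is designed to avoid.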
Measurement, analysis and interpretation of examiner reliability in caries experience surveys: some methodological thoughts.
Clin Oral Investig
PUBLISHED: 09-28-2010
Data obtained from calibration exercises are used to assess the level of agreement between examiners (and the benchmark examiner) and/or between repeated examinations by the same examiner in epidemiological surveys or large-scale clinical studies. Agreement can be measured with several techniques: the kappa statistic, percentage agreement, the Dice coefficient, and sensitivity and specificity. Each of these methods has specific characteristics and its own shortcomings. The aim of this contribution is to critically review techniques for measuring and analysing examiner agreement and to illustrate them with data from a recent survey in young children, the Smile for Life project. The above-mentioned agreement measures are influenced (in differing ways and to differing extents) by the unit of analysis (subject, tooth, or surface level) and by the disease level in the validation sample. These effects are more pronounced for percentage agreement and kappa than for sensitivity and specificity. It is therefore important to report the unit of analysis and the disease level in the validation sample alongside any agreement measure. Confidence intervals should also be included, since they indicate the reliability of the estimate. When observations are dependent [as in caries experience data sets with their typical hierarchical structure (surface-tooth-subject)], this dependency influences the width of the confidence interval and should not be ignored; in this situation, multilevel modelling is necessary. This review clearly shows the need for guidelines on the measurement, interpretation and reporting of examiner reliability in caries experience surveys.
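For binary scores, every agreement measure the abstract compares can be derived from the same 2x2 validation table. A minimal sketch (variable names are illustrative; the abstract's point is that these measures react differently to the unit of analysis and disease level, which this code does not attempt to show):

```python
def agreement_measures(bench, exam):
    """Common examiner-agreement statistics for paired binary scores:
    percentage agreement, Cohen's kappa, sensitivity, specificity, Dice."""
    n = len(bench)
    tp = sum(1 for b, e in zip(bench, exam) if b == 1 and e == 1)
    tn = sum(1 for b, e in zip(bench, exam) if b == 0 and e == 0)
    fp = sum(1 for b, e in zip(bench, exam) if b == 0 and e == 1)
    fn = sum(1 for b, e in zip(bench, exam) if b == 1 and e == 0)
    po = (tp + tn) / n                                   # percentage agreement
    pe = (((tp + fn) / n) * ((tp + fp) / n)              # chance agreement
          + ((tn + fp) / n) * ((tn + fn) / n))
    kappa = (po - pe) / (1 - pe) if pe < 1 else float("nan")
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else float("nan")
    return {"percent": po, "kappa": kappa,
            "sensitivity": sens, "specificity": spec, "dice": dice}

# Toy benchmark vs. examiner scores for four surfaces.
m = agreement_measures([1, 1, 0, 0], [1, 0, 0, 0])
```

Note how kappa (0.5 here) is lower than percentage agreement (0.75): the chance-correction term pe is exactly what makes kappa sensitive to the disease level in the validation sample.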
Dealing with misclassification and missing data when estimating prevalence and incidence of caries experience.
Community Dent Oral Epidemiol
The aim of this research was to estimate the prevalence and incidence of caries experience (CE) in first permanent molars while accounting for misclassification and missing data.
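The abstract does not state its estimator, but a standard textbook correction for prevalence under known misclassification is the Rogan-Gladen formula, p = (p_obs + specificity − 1) / (sensitivity + specificity − 1). Offered here purely as an illustration of the misclassification problem, not as the authors' actual method:

```python
def corrected_prevalence(p_obs, sens, spec):
    """Rogan-Gladen estimator: true prevalence from apparent prevalence,
    given examiner sensitivity and specificity. Result clamped to [0, 1]."""
    denom = sens + spec - 1
    if denom <= 0:
        raise ValueError("sensitivity + specificity must exceed 1")
    p = (p_obs + spec - 1) / denom
    return min(max(p, 0.0), 1.0)

# Apparent prevalence 30%, examiner sensitivity 0.8, specificity 0.9.
p_true = corrected_prevalence(0.3, 0.8, 0.9)
```

With imperfect specificity, some of the apparent 30% are false positives, so the corrected prevalence (about 28.6%) is lower than the observed one.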
Hierarchical modeling of agreement.
Stat Med
Kappa-like agreement indexes are often used to assess agreement among examiners on a categorical scale; their defining feature is that they correct the observed level of agreement for chance. In the present paper, we first define two agreement indexes of this family in a hierarchical context, considering both a random and a fixed set of examiners. We then develop a method to evaluate the influence of factors on these indexes: the agreement indexes are related directly to a set of covariates through a hierarchical model, and the posterior distribution of the model parameters is obtained in a Bayesian framework. We apply the proposed approach to dental data and compare it with the generalized estimating equations approach.

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In both situations, our algorithms still attempt to display the most relevant videos available, which can result in matches that are only loosely related.
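JoVE does not disclose its matching algorithm. A common baseline for this kind of abstract-to-video text matching, offered here only as an illustrative assumption, is TF-IDF term weighting with cosine similarity; when two texts share no informative vocabulary, the similarity is zero, which mirrors the weak matches described above. A stdlib-only sketch with invented toy documents:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Very small TF-IDF implementation: term frequency times log inverse
    document frequency, one sparse dict per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))          # document frequency per term
    n = len(docs)
    return [{t: tf[t] * math.log(n / df[t]) for t in tf}
            for tf in (Counter(toks) for toks in tokenized)]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: two dental abstracts and one unrelated one.
docs = ["dental caries surface scoring",
        "dental caries tooth level",
        "hospital care pathway teams"]
vecs = tfidf_vectors(docs)
```

Here the two caries abstracts score higher against each other than against the hospital abstract, which shares no terms with them at all.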