The Manchester-Modified Disability of Arm, Shoulder and Hand questionnaire (M(2) DASH) was developed by the authors as a modification of the original DASH questionnaire. In this study, we assessed the validity, reliability, responsiveness, and bias of the M(2) DASH questionnaire for hand injuries using completed M(2) DASH, Patient Evaluation Measure, and Michigan Hand Outcome questionnaires from 40 patients. The M(2) DASH scores showed significant positive correlations with the Patient Evaluation Measure and Michigan Hand Outcome scores, suggesting validity. There was also no evidence of a statistical difference in the M(2) DASH scores once the condition had stabilized, suggesting good test-retest reproducibility and reliability. The effect size and the standardized response mean for the M(2) DASH score were greater than those for the Patient Evaluation Measure and Michigan Hand Outcome scores, establishing that the M(2) DASH is highly responsive. There was no bias in the M(2) DASH score by gender, hand dominance, or whether the dominant side was injured. There was, however, a relatively weak association between age and the M(2) DASH score at presentation. We conclude that the M(2) DASH questionnaire is a robust region-specific outcome measure. It is a valid and responsive questionnaire, with test-retest reliability demonstrated for hand injuries in this study. Gender, handedness, and side injured did not bias the responses.
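The responsiveness statistics named above follow standard definitions: the effect size is the mean score change divided by the standard deviation of the baseline scores, and the standardized response mean (SRM) is the mean change divided by the standard deviation of the change scores. A minimal sketch of both calculations, using hypothetical scores rather than the study's data:

```python
import statistics

def responsiveness(baseline, followup):
    """Effect size and standardized response mean (SRM) for paired scores.

    effect size = mean change / SD of baseline scores
    SRM         = mean change / SD of change scores
    """
    changes = [b - f for b, f in zip(baseline, followup)]
    mean_change = statistics.mean(changes)
    effect_size = mean_change / statistics.stdev(baseline)
    srm = mean_change / statistics.stdev(changes)
    return effect_size, srm

# Hypothetical questionnaire scores for illustration only (higher = worse,
# so a fall from baseline to follow-up indicates improvement).
baseline = [62, 55, 70, 48, 66, 59]
followup = [30, 28, 41, 22, 35, 31]
es, srm = responsiveness(baseline, followup)
print(round(es, 2), round(srm, 2))
```

A larger effect size or SRM for one instrument than another, as reported for the M(2) DASH here, indicates greater sensitivity to clinical change over the same interval.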