Executive functioning in children with ADHD: Investigating the cross-method correlations between performance tests and rating scales
Article Category: Research Article
Published Online: 19 Apr 2024
Pages: 1 - 9
DOI: https://doi.org/10.2478/sjcapp-2024-0001
© 2024 Kristoffer Dalsgaard Olsen et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Objective
Replicated evidence shows weak or non-significant correlations between different methods of evaluating executive functions (EF). The current study investigates the association between rating scales and cognitive tests of EF in a sample of children with ADHD and executive dysfunction.
Method
The sample included 139 children (aged 6–13) diagnosed with ADHD and executive dysfunctions. The children completed subtests of the Cambridge Neuropsychological Test Automated Battery (CANTAB). Parents completed the Behavior Rating Inventory of Executive Function (BRIEF) and the Children’s Organizational Skills Scale (COSS).
Analysis
Pairwise Spearman correlations were calculated between the composite scores and the individual subscales of the cognitive tests and rating scales. In secondary analyses, pairwise Spearman correlations were conducted between all composite scales and subscales, stratified by child sex and by ADHD subtype.
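For readers who want to reproduce this type of analysis, the sketch below illustrates pairwise Spearman correlations between test and rating-scale scores, including the stratified secondary analyses. It is a minimal illustration only: the file name and column names (cantab_to, brief_ge, coss_to, sex, adhd_subtype) are hypothetical placeholders, not the study's actual data or variable names.

```python
# Minimal sketch of pairwise Spearman correlations between cognitive-test
# scores and rating-scale scores, with stratified secondary analyses.
# All column names below are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ef_scores.csv")  # hypothetical data file

test_cols = ["cantab_to"]              # CANTAB composite (add subscales as needed)
rating_cols = ["brief_ge", "coss_to"]  # BRIEF and COSS composites (add subscales as needed)

def pairwise_spearman(data, left_cols, right_cols):
    """Return a table of Spearman rho and p-value for every left/right column pair."""
    rows = []
    for left in left_cols:
        for right in right_cols:
            pair = data[[left, right]].dropna()
            rho, p = spearmanr(pair[left], pair[right])
            rows.append({"test": left, "rating": right, "rho": rho, "p": p, "n": len(pair)})
    return pd.DataFrame(rows)

# Primary analysis: composite scores across the full sample
print(pairwise_spearman(df, test_cols, rating_cols))

# Secondary analyses: stratified by child sex and by ADHD subtype
for group_var in ["sex", "adhd_subtype"]:
    for level, subset in df.groupby(group_var):
        result = pairwise_spearman(subset, test_cols, rating_cols)
        result.insert(0, group_var, level)
        print(result)
```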
Results
The correlation analyses between composite scores yielded no significant correlations: r=−.095, p=.289 for CANTAB TO versus BRIEF GE, and r=.042, p=.643 for CANTAB TO versus COSS TO. The analyses across all composite scales and subscales found one significant negative correlation (r=−.25, p<.01). When stratified by ADHD subtype, the ADHD-Inattentive group showed significant moderate negative correlations between the CANTAB and BRIEF composites (r=−.355, p=.014) and subscales.
Discussion
It is possible that the two methods measure different underlying constructs of EF. It may also be relevant to consider the effects of respondent bias and differences in ecological validity across the two measurement methods.
Conclusion
The results showed no significant correlations. In research and clinical settings, the expectation should not be to find the same results when comparing data from cognitive tests and rating scales. Future research might explore novel approaches to EF testing with higher ecological validity and design EF rating scales that capture EF behaviors rather than EF cognition.