Open Access

Comparison of validity, repeatability and reproducibility of the Peer Assessment Rating (PAR) between digital and conventional study models


Introduction

The validity, reliability and inter-method agreement of Peer Assessment Rating (PAR) scores derived from conventional study models and their digital analogues were assessed.

Method

Ten sets of models representing different occlusions were digitised using a 3Shape R700 laser scanner (3Shape, Copenhagen, Denmark). Each set of models was PAR-scored twice, both conventionally and digitally, in random order by 10 examiners, with a minimum of two weeks between repeat measurements. Repeatability was assessed using Carstensen's analysis, and inter-method agreement (IEMA) was assessed using Carstensen's limits of agreement (LOA).
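Carstensen's approach belongs to the Bland-Altman family of agreement methods, extended to handle replicate measurements through variance components. As a simplified illustration only (one score per method per case; the data and variable names below are hypothetical and not from the study), 95% limits of agreement can be sketched as:

```python
# A rough sketch of Bland-Altman-style limits of agreement; the scores
# below are invented and illustrative only.
import numpy as np

# One weighted PAR score per method for each of ten hypothetical cases.
conventional = np.array([24, 31, 18, 40, 27, 35, 22, 29, 33, 26], dtype=float)
digital = np.array([23, 30, 19, 38, 26, 36, 21, 27, 32, 25], dtype=float)

diff = digital - conventional        # per-case difference between methods
bias = diff.mean()                   # mean difference = systematic bias
sd = diff.std(ddof=1)                # sample SD of the differences

# 95% limits of agreement: bias +/- 1.96 x SD of the differences.
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f}, 95% LoA = ({lower:.2f}, {upper:.2f})")
```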

Results

Intra-examiner repeatability (IER) for both the unweighted and weighted data was slightly better for the conventional than for the digital models, and the digital models showed a slightly larger negative bias of −1.62 for the weighted PAR data. IEMA for the overall weighted data ranged from −8.70 to 5.45 (95% confidence interval, CI). Intra-class correlation coefficients (ICC) for the conventional weighted data were 0.955 (95% CI 0.906–0.986) for the individual scenario and 0.998 (95% CI 0.995–0.999) for the average scenario; the corresponding values for the digital weighted data were 0.99 (95% CI 0.97–1.00) and 1.00. The percentage reduction required to achieve an optimal occlusion increased by 0.4% for digital scoring of the weighted data.
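The "individual" and "average" ICC scenarios reported above correspond to the single-rater and average-rater forms of the two-way random-effects intra-class correlation. As a reference point only (the study's own computation may differ, and the demo matrix is invented), a minimal sketch of both forms from ANOVA mean squares:

```python
# A minimal sketch of ICC(2,1) and ICC(2,k) via two-way ANOVA mean squares;
# not code from the study.
import numpy as np

def icc_two_way(x: np.ndarray) -> tuple[float, float]:
    """Return ICC(2,1) (single rater) and ICC(2,k) (average of k raters)
    for an n-targets x k-raters score matrix (two-way random effects)."""
    n, k = x.shape
    grand = x.mean()
    rows = x.mean(axis=1)                            # per-target means
    cols = x.mean(axis=0)                            # per-rater means
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)  # between-target MS
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)  # between-rater MS
    mse = (((x - rows[:, None] - cols[None, :] + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))                    # residual MS
    single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    average = (msr - mse) / (msr + (msc - mse) / n)
    return single, average

# Demo: 5 hypothetical cases scored by 3 hypothetical examiners.
scores = np.array([[24., 25., 23.],
                   [31., 30., 32.],
                   [18., 19., 18.],
                   [40., 38., 41.],
                   [27., 27., 26.]])
print(icc_two_way(scores))  # (single-rater ICC, average-rater ICC)
```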

Conclusion

Digital PAR scores obtained from scanned conventional study models were valid and reliable and, in this context, the digital semi-automated method can be used interchangeably with the conventional method of PAR scoring.

eISSN:
2207-7480
Language:
English
Publication frequency:
Volume Open
Journal subjects:
Medicine, Basic Medical Science, other