Error-rate before and after (which statistical test)

#1
Guys, I did advanced analysis a long time ago and my statistical sense is now rusty. I couldn't come up with a confident answer on a quick review and would appreciate input.

What would be the correct statistical test for comparing the prescription-error rate before an intervention (a 1-month window, taken 1 year before) with the rate 2 months after the intervention (again a 1-month window)?
Should I just be comparing percentages, running a chi-square test?

Thanks,

StatsClue
 

Karabiner

TS Contributor
#2
Could you please provide information about the study? E.g. what is the study about,
what are the research questions, how was the study designed (what was actually measured,
and how was it measured; when were measurements taken; were the same subjects/entities
measured on different occasions; how large were the sample sizes)?

With kind regards

Karabiner
 
#3
Thanks Karabiner.

Question: Is the prescription error-rate significantly different after the introduction of electronic prescribing?

Data will be collected retrospectively. There is already in place a system to report prescription-errors. It was in place before and has remained in place after the introduction of e-prescribing (as opposed to paper-based prescribing).

Let's say e-prescribing was introduced in July 2018. The plan is to look at all reported prescription errors in the month of March 2018 (pre-electronic prescribing) and in the month of March 2019 (post-electronic prescribing). The denominator will be the total number of prescriptions in these months, and the percentage of prescription errors will thereby be obtained. If denominator data are not available, I may need to estimate them in some way, for example from the number of patients and the average number of prescriptions per patient.
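For illustration, here is a minimal sketch of the kind of comparison I had in mind: a chi-square test on the 2x2 table of errors vs. error-free prescriptions for the two March months. All counts below are placeholders, not real data.

```python
# Minimal sketch: chi-square comparison of two error proportions.
# The counts are hypothetical placeholders, not real data.
from scipy.stats import chi2_contingency

errors_pre, total_pre = 60, 2500    # hypothetical March 2018 figures
errors_post, total_post = 40, 2600  # hypothetical March 2019 figures

# Rows: period (pre, post); columns: (errors, error-free prescriptions)
table = [
    [errors_pre,  total_pre - errors_pre],
    [errors_post, total_post - errors_post],
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)  # counts are large, so no Yates correction
print(f"Pre rate:  {errors_pre / total_pre:.3%}")
print(f"Post rate: {errors_post / total_post:.3%}")
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

As I understand it, a two-proportion z-test would give an equivalent answer for a 2x2 table like this.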

In addition to those two months, I will also be interested in looking at the prescription-error rate sooner, say one month after the introduction of e-prescribing, and then 2 months after, to see if there is a trend (improvement / worsening / no change). The reason for choosing March and March is to offset the effect of staff changeover in particular months (new staff, errors more likely) and seasonal effects on patient volume (more volume, more pressured staff, more errors likely).
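To illustrate the trend check, a minimal sketch, assuming I can aggregate error counts and totals by month (all numbers below are made up):

```python
import numpy as np
import statsmodels.api as sm

# Illustrative monthly aggregates only; the counts are made up.
months = np.array([1, 2, 3])            # months since e-prescribing go-live
errors = np.array([55, 48, 42])         # hypothetical error counts
totals = np.array([2400, 2500, 2450])   # hypothetical prescription totals

X = sm.add_constant(months)
y = np.column_stack([errors, totals - errors])  # (successes, failures) per month

# Binomial GLM: the sign and significance of the month coefficient
# indicate whether the error rate is trending up or down.
trend = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(trend.summary())
```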

I expect the denominator to be in the range of 2000 to 3000.

Thanks.
 

hlsmith

Not a robit
#5
Interrupted time series would be ideal, though you need enough measurements. I imagine that, for some error types, errors may be clustered within clinicians. Can you name some of the error types recorded?
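As a rough sketch of what I mean (made-up monthly counts; segmented binomial regression is one common way to fit an interrupted time series):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly series spanning a July 2018 go-live; all figures are placeholders.
df = pd.DataFrame({
    "month":  np.arange(12),  # 0 = Jan 2018, ..., 11 = Dec 2018
    "errors": [62, 58, 60, 61, 59, 63, 50, 47, 45, 44, 43, 41],
    "total":  [2450, 2400, 2500, 2480, 2520, 2490, 2510, 2470, 2500, 2530, 2480, 2460],
})
df["post"] = (df["month"] >= 6).astype(int)           # 1 from July 2018 onward
df["months_since"] = np.maximum(df["month"] - 6, 0)   # time since the intervention

X = sm.add_constant(df[["month", "post", "months_since"]])
y = np.column_stack([df["errors"], df["total"] - df["errors"]])

# Segmented regression as a binomial GLM: "post" captures a level change
# and "months_since" a slope change at the intervention point.
its = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(its.summary())
```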

Will you have some of the same patients in both periods? I am guessing you will, so you need to question whether they have correlated risk; if so, this may be addressed with some type of robust standard error.
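Something like this, as an illustration only (simulated data, made-up column names): a prescription-level model with cluster-robust (sandwich) standard errors grouped by patient.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated prescription-level data; column names are illustrative only.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "error":      rng.integers(0, 2, n),    # 1 = prescription error reported
    "post":       rng.integers(0, 2, n),    # 1 = after e-prescribing go-live
    "patient_id": rng.integers(0, 120, n),  # patients can appear in both periods
})

X = sm.add_constant(df[["post"]])
model = sm.GLM(df["error"], X, family=sm.families.Binomial())

# Cluster-robust standard errors grouped by patient, to allow for
# correlated error risk within the same patient across periods.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["patient_id"]})
print(fit.summary())
```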