Review Article

Randomized Controlled Trials of Acupuncture (1997–2007): An Assessment of Reporting Quality with a CONSORT- and STRICTA-Based Instrument

Table 1

Oregon CONSORT STRICTA instrument (OCSI).

Item number | Paper section | Question

1 | Abstract | Is there an explicit statement that patients were randomly assigned to interventions?

2 | Introduction/Background | (a) Is scientific background provided and (b) is the rationale explained?

3 | Methods | (a) Are the eligibility criteria (inclusion and exclusion criteria) stated and (b) are the setting(s) and location(s) where the data were collected described?

4 | Methods | (a) Is the style of acupuncture stated? (b) Is the rationale presented for the selection of acupuncture points? (c) Was the rationale justified?

5 | Methods | Are the following parameters of needling presented?
(a) Points used (uni/bilateral)
(b) Number of needles inserted
(c) Depth(s) of insertion
(d) Response elicited (e.g., de qi)
(e) Needle stimulation (manual or electrical)
(f) Needle retention time
(g) Needle type (material and/or manufacturer, gauge, and length)

6 | Methods | Are the (a) number and (b) frequency of treatments stated?

7 | Methods | Are details of the acupuncture group cointervention(s) presented (e.g., moxa, cupping, lifestyle advice, plum-blossom needling, Chinese herbs)?

8 | Methods | Are descriptions provided of the (a) duration of practitioner training, (b) length of clinical experience, and (c) expertise in the specific condition?

9 | Methods | (a) Is the intended effect of the control or comparison intervention presented? (b) Were the specific explanations given to patients of the treatment and control interventions presented? (c) Are details for the control or comparison intervention presented? (d) Are sources provided that justify the choice of the control or comparison intervention?

10 | Methods | Are there statements of (a) specific objectives and (b) hypotheses to be tested?

11 | Methods | (a) Are primary and (if applicable) secondary outcome measures clearly defined? (b) Are there statements (when applicable) regarding any methods used to enhance the quality of measurements, for example, multiple observers or training of assessors?

12 | Methods | (a) Is there a statement regarding how the sample size was determined, and (b) if applicable, an explanation of any interim analyses and stopping rules?

13 | Methods | (a) Is the method presented that was used to generate the random allocation sequence, and (b) if applicable, details of any restriction (e.g., blocking, stratification)?

14 | Methods | (a) Is the method presented that was used to implement the random allocation sequence, and (b) is it clarified whether the sequence was concealed until interventions were assigned?

15 | Methods | Are there statements as to (a) who generated the allocation sequence, (b) who enrolled participants, and (c) who assigned participants to their groups?

16 | Methods | Is it stated whether or not (a) participants, (b) those administering the interventions, and (c) those assessing the outcomes were blinded? (d) Was the success of participant blinding evaluated?

17 | Methods | (a) Were the statistical methods stated that were used to compare groups for primary outcomes? (b) Were the statistical methods stated that were used for additional analyses such as subgroup or adjusted analyses?

18 | Results | (a) Is the flow of participants through each stage quantitatively described, and (b) if protocol deviations are reported, were reasons presented?

19 | Results | (a) Are dates provided that define the period of recruitment? (b) Is the length of follow-up (on-treatment and post-treatment) reported?

20 | Results | (a) Are baseline demographics and (b) clinical characteristics presented for each group?

21 | Results | (a) Is the number of participants in each group included in each analysis? (b) Was the “intention to treat” analysis presented? (c) When feasible, are the results stated in absolute numbers (e.g., 10 of 20, not just 50%)?

22 | Results | For each primary and (if applicable) secondary outcome, is (a) a summary of results presented for each group, (b) the estimated effect size presented for each between-group difference (e.g., SD), and (c) the precision of the effect size presented for each between-group difference (e.g., confidence interval (CI))?

23 | Results | If additional subgroup analyses and/or adjusted analyses are reported, is it stated whether they were prespecified or exploratory, that is, not prespecified?

24 | Results | Are all important adverse events or side effects presented for each intervention group?

25 | Discussion | Is an interpretation of the results presented that takes into account (a) study hypotheses, (b) sources of potential bias or imprecision, and (c) the potential dangers associated with multiple analyses and outcomes?

26 | Discussion | Is the generalizability (external validity) of the trial findings discussed?

27 | Discussion | Is a general interpretation of the results presented in the context of current evidence?

Instructions. The OCSI evaluates how well an “item” is reported, not whether it was appropriate or adequate. When scoring each question, consider the following. (i) If you were a reviewer of the paper, would you be satisfied with what is reported? (ii) If you were attempting to reproduce the findings, are sufficient details reported to allow you to do so?
Note. Items 4–9 from STRICTA (MacPherson et al., 2002) substitute for item 4 of CONSORT (Altman et al., 2001) [17, 18].
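For readers who wish to apply the instrument systematically, Table 1 translates naturally into a checklist data structure. The Python sketch below is illustrative only: the item numbers, sections, and sub-part counts follow Table 1, but the scoring rule (the fraction of an item's sub-parts that are reported, averaged across items) and all identifiers (OCSIItem, item_score, overall_score) are assumptions made for this sketch, not the scoring procedure used in the review.

```python
# Illustrative sketch only (not the authors' scoring procedure): represent the
# OCSI checklist as data and compute an assumed reporting-completeness score.

from dataclasses import dataclass


@dataclass
class OCSIItem:
    number: int    # item number (1-27) from Table 1
    section: str   # paper section the item addresses (e.g., "Methods")
    subparts: int  # number of sub-questions (a), (b), ...; 1 if the item has none


# A few items transcribed from Table 1; sub-part counts follow the table.
ITEMS = [
    OCSIItem(1, "Abstract", 1),   # explicit statement of random assignment
    OCSIItem(5, "Methods", 7),    # needling parameters (a)-(g)
    OCSIItem(16, "Methods", 4),   # blinding (a)-(c) plus evaluation of blinding (d)
]


def item_score(reported_subparts: int, item: OCSIItem) -> float:
    """Assumed rule: fraction of an item's sub-parts that are reported."""
    return reported_subparts / item.subparts


def overall_score(reported: dict[int, int], items: list[OCSIItem]) -> float:
    """Assumed summary: mean of per-item scores over the items assessed."""
    return sum(item_score(reported.get(i.number, 0), i) for i in items) / len(items)


# Example: a trial report that states randomization in the abstract (item 1),
# reports 5 of 7 needling parameters (item 5), and 2 of 4 blinding sub-parts (item 16).
print(round(overall_score({1: 1, 5: 5, 16: 2}, ITEMS), 2))  # 0.74
```

Extending the sketch to the full instrument would simply mean listing all 27 items with their sub-part counts; any per-section breakdown (Abstract, Methods, Results, Discussion) follows the same pattern by grouping items on the section field.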