Appraisal Tool for Evaluating Experimental and Quasi-Experimental Designs
1) Briefly discuss the review of literature by the researchers. For example:
a) Does the review seem thorough? Does it include all or most of the major studies on the topic?
b) Does the review rely on appropriate materials (e.g. mainly on research reports, using primary sources)?
c) Is the review merely a summary of existing work, or does it critically appraise and compare key studies?
d) Is the review well organized? Is the development of ideas clear?
e) Does the review identify gaps in the literature and support the need for the study?
f) Does the review indicate the significance of the research question?
2) Briefly discuss the hypothesis(es)/research questions/purpose (whichever the researchers pose). For example:
a) What are the statements of purpose, research questions, and/or hypotheses? Is the information communicated clearly and concisely, and is it placed in a logical and useful location?
b) Are purpose statements or questions worded appropriately and clearly? For example, are key concepts/variables identified and the population of interest specified? Are verbs used appropriately to suggest the nature of the inquiry and/or the research tradition?
c) Does the research question express a relationship between two or more variables or at least between an independent and a dependent variable, implying empirical testability?
d) If there are no formal hypotheses, is their absence justified? Are inferential statistics used in analyzing the data despite the absence of stated hypotheses?
e) Do hypotheses (if any) flow from a theory or previous research? How do the hypotheses relate to the research problem? Is there a justifiable basis for the predictions?
f) Are hypotheses (if any) properly worded – do they state a predicted relationship between two or more variables? Are they directional or nondirectional, and is there a rationale for how they were stated? Are they presented as research or as null hypotheses?
3) Identify the specific type of research design used and briefly discuss whether or not the authors used an appropriate method to answer their question. For example,
a) Does the study purpose match the study design?
b) If the study used a mixed-method design, was this approach appropriate? Comment on the timing of collecting the various types of data. How did the inclusion of both approaches contribute to enhanced theoretical insights or enhanced validity?
c) What did the researcher do to control confounding external factors and intrinsic subject characteristics?
d) What steps did the researcher take to enhance the internal validity and external validity? To what extent were those steps successful?
e) What are the major limitations of the design used? Are these limitations acknowledged by researcher and taken into account in interpreting results?
4) Briefly discuss the method used to recruit the subjects. For example,
a) What type of sampling plan was used (true experimental design must use randomization while quasi-experimental may not)? Was the sampling plan one that could be expected to yield a representative sample?
b) How were subjects recruited into experimental and control groups (a quasi-experimental design may not have a control group)? Does the method suggest potential biases, e.g. selection bias, selection effects, reactive effects, etc.?
c) Did some factor other than the sampling plan (e.g. a low response rate, non-random drop off) affect the representativeness of the sample?
d) Was the sample size justified on the basis of a power analysis or other rationale, so that the sample is sufficiently large to support statistical conclusion validity?
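When appraising a power analysis, it can help to know roughly what sample size a stated effect size, alpha, and power imply. The sketch below uses the standard normal-approximation formula for a two-sided, two-sample comparison of means; the effect size, alpha, and power values are conventional defaults chosen for illustration, not figures from any particular study.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (d = 0.5) at conventional thresholds:
n = n_per_group(0.5)  # about 63 participants per group
```

If an article reports a much smaller sample than such a calculation implies for its claimed effect size, statistical conclusion validity is in question.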
5) Briefly discuss the intervention. For example,
a) Did the researchers operationally define the intervention?
b) Was there a well-conceived intervention theory that guided the endeavor? Was the intervention adequately pilot tested?
c) Were the researchers consistent in administering the intervention?
d) Were the experimental group subjects and the control subjects separated (or is there potential contamination of treatment for the control group)?
e) Did the research include blinding? If not, should blinding have been included in the design?
6) Briefly discuss the outcome variables, including instrument used and data collection method. For example,
a) Were they operationally defined?
b) Is there congruence between the research variables as conceptualized (theoretical definition, as discussed in the introduction, literature review or theoretical framework section) and as operationalized (as described in the method section)?
c) Were the measurement tools valid and reliable? What were the reliability and validity reported in the article? Does the evidence come from the research sample itself or is it based on other studies? If there is no reliability and validity information, what conclusion can you reach about the quality of the data in the study?
d) If a diagnostic or screening tool/equipment was used, is information provided about its sensitivity and specificity and were these qualities adequate?
e) Were the data collected in such a way that measurement errors were minimized? For example, did the researchers switch data collection methods mid-study (e.g. from face-to-face interviews to phone or mailed surveys), which could introduce measurement error?
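For item 6(d), sensitivity and specificity are simple proportions from a 2x2 table of test results against true disease status. The counts below are invented purely to show the arithmetic a reader can apply to a study's reported table.

```python
# Hypothetical screening-test results (invented counts, for illustration only)
true_positive, false_negative = 90, 10   # participants who have the condition
true_negative, false_positive = 80, 20   # participants who do not

# Sensitivity: proportion of true cases the tool correctly detects
sensitivity = true_positive / (true_positive + false_negative)  # 0.90

# Specificity: proportion of non-cases the tool correctly rules out
specificity = true_negative / (true_negative + false_positive)  # 0.80
```

Whether 90% sensitivity and 80% specificity are "adequate" depends on the clinical consequences of missed cases versus false alarms, which the article should discuss.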
7) Briefly discuss any ethical issues
a) Did the researcher take appropriate steps to remove or prevent harm?
b) Was any type of coercion or undue influence used to recruit participants?
c) Did the participants have the right to refuse to participate or withdraw without penalty?
d) Were adequate steps taken to safeguard the privacy of participants?
e) How were data kept anonymous or confidential?
f) Were groups omitted from the inquiry without a justifiable rationale?
g) Were vulnerable groups involved in the research? If yes, were special precautions instituted because of their vulnerable status?
8) Briefly discuss the study results. For example,
a) Did the researchers analyze the data appropriately?
b) Did the researchers discuss the threats/confounding variables/issues in relation to the results?
c) Were the results statistically significant? That is, was a p-value reported? Were risk ratios or odds ratios reported? Was a confidence interval given? Were power data included?
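When checking reported odds ratios and confidence intervals, the standard log-odds calculation can be reproduced from a study's 2x2 table. The counts below are invented for illustration; the formula (with the usual 1.96 multiplier for a 95% interval) is the conventional Woolf/normal-approximation method.

```python
import math

# Hypothetical 2x2 table (invented counts):
# rows = exposed / unexposed, columns = outcome / no outcome
a, b = 20, 80   # exposed:   outcome present / absent
c, d = 10, 90   # unexposed: outcome present / absent

odds_ratio = (a * d) / (b * c)                 # (20*90)/(80*10) = 2.25
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# 95% CI of roughly (0.99, 5.09): the interval includes 1.0, so this
# odds ratio would not be statistically significant at the .05 level.
```

This illustrates why appraisers should look for the confidence interval and not just the point estimate: an OR of 2.25 sounds substantial, yet here the interval crosses 1.0.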
9) Briefly discuss how confident you are in the results. For example,
a) Is the design strong enough for the results to be credible?
b) Or, is the study too flawed to be valid and reliable?
10) Briefly discuss the generalizability of the results. In other words, can the results be generalized to YOUR population?