Designs with more than one independent variable (IV) will have multiple hypothesis pairs (a null and a research hypothesis for each effect). Each IV must have its own hypothesis pair in the following format.
H0: There will be no difference in the <dependent variable> based on differences in the <independent variable>.
H1: There will be a difference in the <dependent variable> based on differences in the <independent variable>.
Therefore, if you have 2 IVs, you will have 2 pairs of hypotheses with each pair having its own null and research hypothesis. If you have 3 IVs, you will have 3 pairs of hypotheses. It is not typical to have more than 3 IVs, and for this class we will definitely limit our designs to 3.
In addition, there may be interactions among the IVs. Imagine that one IV is study technique for an exam (rereading the textbook versus studying PowerPoint slides only) and another IV is class level (freshman, sophomore, junior, or senior). It is reasonable to assume that performance on the exam might depend on either study technique or class level alone; in that case, we say there is a main effect for the IV in question. It is also reasonable to assume, however, that performance on the exam might depend on an interaction between these two variables: seniors might perform better using only PowerPoints while freshmen perform better when they only reread the text.
You must have a hypothesis pair for every possible interaction. If you have 2 IVs, there is one possible interaction. If you have 3 IVs, there are 4 possible interactions. If your IVs are labeled A, B, and C, the possible interactions are A x B, A x C, B x C, and A x B x C. So, if you have 3 IVs you will have a total of 7 hypothesis pairs (3 main effects, 3 two-way interactions, and 1 three-way interaction). Hypotheses for interactions must be written as follows:
H0: There will be no interaction between the <independent variable A and the independent variable B> in terms of the <dependent variable>. A three-way interaction null hypothesis would substitute <independent variable A, independent variable B, and independent variable C>.
H1: There will be an interaction between the <independent variable A and the independent variable B> in terms of the <dependent variable>.
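The counting rule above (one pair per main effect plus one pair per possible interaction) can be checked with a short script. This is only an illustration; the function name is hypothetical:

```python
from itertools import combinations

def hypothesis_pairs(ivs):
    """List every effect (main effects plus all interactions) that
    needs its own null/research hypothesis pair."""
    effects = []
    for size in range(1, len(ivs) + 1):
        effects.extend(combinations(ivs, size))
    return [" x ".join(effect) for effect in effects]

print(hypothesis_pairs(["A", "B", "C"]))
# 3 main effects + 3 two-way interactions + 1 three-way interaction = 7 pairs
```

With 2 IVs the same function returns 3 effects (A, B, and A x B), matching the counts used in the assignment below.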
Independent Variables (IVs):
The IVs must be categorical or be able to be categorized in a theoretically logical manner.
The IVs may be based on subject characteristics in which case the design is quasi-experimental. Any or all of the IVs may consist of a no-treatment control group and two or more treatment experimental groups, in which case the design is experimental.
Note that in the case of complex designs, each IV may have only 2 levels (such as study technique in the example described above) or 3 or more levels (such as class level in the same example).
Dependent Variable (DV):
The DV is a measure obtained after some experimental procedure or test administered identically to all groups defined by the IVs.
The DV must be measured on an interval or ratio scale.
There cannot be more than one DV.
This design may be used for between-groups analyses and within-groups analyses. If each participant has only one score, then it is a between-groups design: each participant contributes to exactly one level of each IV, that is, to a single cell of the design. For example, one IV might be study technique, with 10 students studying for an exam using technique A, 10 using technique B, and 10 using technique C. A second IV is gender; within each technique group, 5 of the 10 students are female and 5 are male. Each participant therefore contributes one score to one technique-by-gender cell.
If each participant has scores for each level, then it is a within-groups design. For example, one IV is study technique with 3 levels (techniques A, B, and C) and another IV is the time when the exam occurs (before or after 12:00 p.m.). In this instance, the same 10 students would study for different exams using each of the 3 techniques and be asked to take different exams at both time periods. Note that in this case, each participant would contribute 6 scores to the analysis (other variations are possible).
Statistical power is usually determined before deciding how many participants must be recruited. Power depends on the effect size you predict; smaller effect sizes require more participants. For our purposes, we will say that there must be at least 10 participants per group; more participants per group is preferable if it is practical.
Different levels of the IV may have unequal numbers of participants, but the levels should not be too unequal.
Participants must be randomly assigned to groups for any experimental IV. (IVs based on subject characteristics, as in quasi-experimental designs, cannot be randomly assigned.)
Analysis of between-groups designs uses Two- or Three-Way ANOVA (ANalysis Of VAriance). Analysis of within-groups designs uses Two- or Three-Way Repeated Measures ANOVA.
If the ANOVA shows a statistically significant difference, post-hoc analyses must be conducted to determine which pairs of levels within the IV differ. Typically, one will use Tukey's Honestly Significant Difference (HSD) test.
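The omnibus-test-then-post-hoc sequence can be sketched with SciPy. This is a simplified one-way illustration with made-up exam scores (the same logic extends to factorial designs), and it assumes SciPy 1.8 or later for tukey_hsd:

```python
from scipy.stats import f_oneway, tukey_hsd

# Hypothetical exam scores for three study-technique groups
technique_a = [78, 82, 75, 80, 79]
technique_b = [85, 88, 90, 84, 87]
technique_c = [70, 72, 68, 74, 71]

# Omnibus ANOVA: is there any difference among the groups?
f_stat, p_value = f_oneway(technique_a, technique_b, technique_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Only if the omnibus test is significant do we ask WHICH pairs differ
if p_value < 0.05:
    result = tukey_hsd(technique_a, technique_b, technique_c)
    print(result.pvalue)  # matrix of pairwise p-values
```

The Tukey result reports a p-value for every pairing of levels, so you can see, for example, whether technique A differs from technique B specifically rather than only knowing that some difference exists somewhere.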
If the ANOVA shows a statistically significant difference, it is common to calculate the effect size: the proportion of variability in the DV accounted for by differences in the IV. This value ranges from 0.0 (no effect) to 1.0 (complete determination). This measure of effect size is called eta-squared (η²).
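Eta-squared is the between-groups sum of squares divided by the total sum of squares. A minimal sketch, using hypothetical scores in a simplified one-way case (the same ratio generalizes to each effect in a factorial design):

```python
def eta_squared(groups):
    """Effect size: proportion of total variability in the DV
    accounted for by group (IV) membership."""
    all_scores = [score for group in groups for score in group]
    grand_mean = sum(all_scores) / len(all_scores)
    # Total variability around the grand mean
    ss_total = sum((score - grand_mean) ** 2 for score in all_scores)
    # Variability of the group means around the grand mean
    ss_between = sum(
        len(group) * (sum(group) / len(group) - grand_mean) ** 2
        for group in groups
    )
    return ss_between / ss_total

# Hypothetical exam scores for three study-technique groups
scores = [[78, 82, 75, 80], [85, 88, 90, 84], [70, 72, 68, 74]]
print(round(eta_squared(scores), 2))  # → 0.88
```

A value of 0.88 would mean that 88% of the variability in exam scores is associated with study-technique group membership; values near 0.0 would mean the IV explains almost none of it.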
Designs 4: Assignment
Choose 1 (circle): Between-Subjects Design
Independent Variables (be sure to list all levels of each IV):
List All Hypotheses (3 pairs for 2 IVs; 7 pairs for 3 IVs):
Description of Participants:
Brief Description of Procedure: