The two-way ANOVA is used to determine whether there is an interaction effect between two independent variables on a continuous dependent variable (i.e., whether a two-way interaction effect exists). In many ways, the two-way ANOVA can be considered an extension of the one-way ANOVA, which deals with just one independent variable rather than the two independent variables of the two-way ANOVA.

**Note:** It is quite common for the independent variables to be called “factors” or “between-subjects factors”, but we will continue to refer to them as independent variables in this guide. Furthermore, it is worth noting that the two-way ANOVA is also referred to as a “factorial ANOVA” or, more specifically, as a “two-way between-subjects ANOVA”.

A two-way ANOVA can be used in a number of situations. For example, consider an experiment where two drugs were being given to elderly patients to treat heart disease. One of the drugs was the current drug being used to treat heart disease and the other was an experimental drug that the researchers wanted to compare to the current drug. The researchers also wanted to understand how the drugs compared in low and high-risk elderly patients. The goal was for the drugs to lower cholesterol concentration in the blood. The two independent variables are **drug** with two levels (“Current” and “Experimental”) and **risk** with two levels (“Low” and “High”). The dependent variable was **cholesterol** (i.e., cholesterol concentration in the blood). The researchers want to know: (a) whether the experimental drug is better or worse than the current drug at lowering cholesterol; and (b) whether the effect of the two drugs differs depending on whether elderly patients are classified as low or high risk. These two aims are entirely typical of a two-way ANOVA analysis. Importantly, the second aim is answered by determining whether there is a statistically significant interaction effect. This is usually given first priority in a two-way ANOVA analysis because its result will determine whether the researchers’ first aim is misleading or incomplete. If a statistically significant interaction effect is found, this indicates that the two drugs have different effects in low and high-risk elderly patients (i.e., the effect of **drug** on **cholesterol** depends on the level of **risk**). Whether you find a statistically significant interaction, and the type of interaction it is, will determine which effects in the two-way ANOVA you should interpret and which post hoc tests you may want to run.
These issues are explained as you work through the guide, so you will know exactly what each statistical test is telling you and how to write up your results accurately.
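The logic of an interaction effect can be illustrated numerically. Below is a minimal pure-Python sketch using hypothetical cholesterol values (invented for illustration; not SPSS output or real trial data) for the 2 x 2 drug-by-risk design. It computes the cell means and the “difference of differences” that an interaction effect captures: if the drug effect is the same at both risk levels, this quantity is zero.

```python
from statistics import mean

# Hypothetical cholesterol concentrations (mmol/L) per cell -- invented data
cells = {
    ("Current", "Low"):       [6.1, 5.9, 6.0, 6.2],
    ("Current", "High"):      [7.0, 7.2, 6.9, 7.1],
    ("Experimental", "Low"):  [5.8, 5.7, 5.9, 5.8],
    ("Experimental", "High"): [6.0, 5.9, 6.1, 6.0],
}

cell_means = {cell: mean(values) for cell, values in cells.items()}

# Effect of drug (Experimental minus Current) within each risk level
effect_low = cell_means[("Experimental", "Low")] - cell_means[("Current", "Low")]
effect_high = cell_means[("Experimental", "High")] - cell_means[("Current", "High")]

# The interaction asks: is the drug effect the same at both risk levels?
interaction = effect_high - effect_low
```

In this invented example, the experimental drug lowers cholesterol at both risk levels, but by a larger amount in high-risk patients, so the interaction quantity is non-zero; whether such a difference is *statistically significant* is what the two-way ANOVA's interaction test evaluates.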

**Note:** A two-way ANOVA can be described by the number of groups in each independent variable. So, for example, if you had a two-way ANOVA with gender (male/female) and transport type (bus/train/car) as the independent variables, you could describe this as a 2 x 3 ANOVA. This is a fairly generic way to describe ANOVAs with two or more between-subjects factors (e.g., a three-way ANOVA could be written 2 x 3 x 5 ANOVA if another independent variable was included which had five groups).

## Assumptions

In order to run a two-way ANOVA, there are six assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you chose to make, whilst the last three assumptions relate to how well your data fit the two-way ANOVA model. These assumptions are:

- Assumption #1: You have **one dependent variable** that is measured at the **continuous** level (i.e., the **interval** or **ratio** level). Examples of **continuous variables** include height (measured in meters and centimeters), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), firm size (measured in terms of the number of employees), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), weight (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.

**Note:** You should note that SPSS Statistics refers to continuous variables as **Scale** variables.

- Assumption #2: You have **two independent variables** where **each independent variable** consists of **two or more categorical**, **independent groups**. An independent variable with only **two groups** is known as a **dichotomous variable**, whereas an independent variable with **three or more groups** is referred to as a **polytomous** variable. Example independent variables that meet this criterion include gender (e.g., two groups: male and female), ethnicity (e.g., three groups: Caucasian, African American, and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth. If you need more information about variables and their different types of measurement, please contact us.

**Explanation 1:** The “groups” of the independent variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”). However, these three terms – “groups”, “categories” and “levels” – can be used interchangeably. We will mostly refer to them as groups, but in some cases, we will refer to them as levels. The only reason we do this is for clarity (i.e., it sometimes sounds more appropriate in a sentence to use levels instead of groups, and vice versa).

**Explanation 2:** The independent variable(s) in any type of ANOVA is also commonly referred to as a **factor**. For example, a two-way ANOVA is an ANOVA analysis involving two factors (i.e., two independent variables). Furthermore, when an independent variable/factor has independent groups (i.e., unrelated groups), it is further classified as a **between-subjects factor** because you are concerned with the differences in the dependent variable between different subjects. However, for clarity, we will simply refer to them as independent variables in this guide.

**Note:** For the two-way ANOVA demonstrated in this guide, the independent variables are referred to as **fixed factors** or **fixed effects**. This means that the groups of each independent variable represent all the categories of the independent variable you are interested in. For example, you might be interested in exam performance differences between schools. If you investigated three different schools and it was only these three schools that you were interested in, the independent variable is a **fixed factor**. However, if you picked the three schools at random and they were meant to represent all schools, the independent variable is a **random factor**. This requires a different statistical test because the two-way ANOVA is the incorrect statistical test in these circumstances. If you have a random factor in your study design, please __contact us__ and we will look at adding an SPSS Statistics guide to help with this.

- Assumption #3: You should have **independence of observations**, which means that there is no relationship between the observations in each group of the independent variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a two-way ANOVA) are groups where there is no relationship between the participants in any of the groups. Most often, this occurs simply by having different participants in each group.

**Note:** When we talk about the **observations being independent**, this means that the observations (e.g., participants) are **not related**. Specifically, it is the **errors** that are assumed to be independent. In statistics, errors that are not independent are often referred to as **correlated errors**. This can lead to some confusion because of the similarity of the name to that of tests of correlation (e.g., Pearson’s correlation), but correlated errors simply means that the errors are not independent. The errors are at high risk of not being independent if the observations are not independent.

For example, if you split a group of individuals into four groups based on their physical activity level (e.g., a “sedentary” group, “low” group, “moderate” group and “high” group), no one in the sedentary group can also be in the high group, no one in the moderate group can also be in the high group, and so forth. As another example, you might randomly assign participants to either a control trial or one of two interventions. Again, no participant can be in more than one group (e.g., a participant in the control group cannot be in either of the intervention groups). This will be true of any independent groups you form (i.e., a participant cannot be a member of more than one group). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in different groups are considered unrelated, not just different people. Furthermore, participants in one group cannot influence any of the participants in any other group.

An example of where related observations might be a problem is if all the participants in your study (or the participants within each group) were assessed together, such that a participant’s performance affects another participant’s performance (e.g., participants encourage each other to lose more weight in a ‘weight loss intervention’ when assessed as a group compared to being assessed individually; or athletic participants being asked to complete ‘100m sprint tests’ together rather than individually, with the added competition amongst participants resulting in faster times, etc.). This can occur when you have tested individuals in blocks (e.g., 10 participants at a time) to make life easier for yourself or due to other constraints. However, the participants in each block might provide more similar results than those from other blocks. Participants might also be considered related due to their inherent or preselected attributes. For example, your sample may consist of twins or a husband and wife, and yet you may have considered them to be unrelated when they are related. Alternatively, you may have repeatedly tested the same participant, not expecting his or her measurements to be more similar to each other than measurements from different participants. If you are using the same participants in each group or they are otherwise related, a __two-way repeated measures ANOVA__ is a more appropriate test (or a __two-way mixed ANOVA__ if only one of your independent variables consists of related groups).

Independence of observations is largely a study design issue rather than something you can test for using SPSS Statistics, but it is an important assumption of the two-way ANOVA. If your study fails this assumption, you will need to use another statistical test instead of the two-way ANOVA.

- Assumption #4: There should be no significant outliers in any cell of the design. Outliers are simply data points within your data that do not follow the usual pattern (e.g., in a study of 100 students’ IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative impact on the two-way ANOVA by: (a) distorting the differences between cells of the design; and (b) causing problems when generalizing the results (of the sample) to the population.
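One common way to flag outliers is Tukey's boxplot rule, which marks any point more than 1.5 interquartile ranges beyond the quartiles (SPSS Statistics uses a similar box-length rule in its boxplots). Purely as an illustration, here is a pure-Python sketch with invented IQ scores for one cell of a design:

```python
from statistics import quantiles

def iqr_outliers(values):
    """Flag points more than 1.5 IQRs beyond the quartiles (Tukey's boxplot rule)."""
    q1, _, q3 = quantiles(values, n=4)  # quartiles (exclusive method by default)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical IQ scores for one cell of the design -- invented data
scores = [104, 108, 110, 106, 107, 109, 105, 156]
outliers = iqr_outliers(scores)  # the score of 156 is flagged
```

Note that how you handle a flagged point (keep, transform, or remove it) is a separate judgment; a flagged value is not automatically an error.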

- Assumption #5: Your dependent variable (residuals) should be approximately normally distributed for each cell of the design. The assumption of normality is necessary for statistical significance testing using a two-way ANOVA. However, the two-way ANOVA is considered “robust” to violations of normality. This means that some violation of this assumption can be tolerated and the test will still provide valid results. Therefore, you will often hear of this test only requiring *approximately* normally distributed data. Furthermore, as sample size increases, the distribution can be quite non-normal and, thanks to the Central Limit Theorem, the two-way ANOVA can still provide valid results. Unfortunately, how large is large enough is not well known (e.g., Wilcox, 2012a). Also, it should be noted that if the distributions are all skewed in a similar manner (e.g., all moderately negatively skewed), this is not as troublesome as the situation where groups have differently-shaped distributions (e.g., each combination of groups has a different skew). Again, technically, the assumption of normality is with respect to the residuals and not the raw data. Therefore, in this example, you need to investigate whether the residuals, RES_1, are normally distributed in each cell of the design.
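Since the assumption concerns the residuals rather than the raw scores, it helps to see what a residual is in this design: each observation minus its cell mean (the model's prediction for that cell). A minimal pure-Python sketch, with invented values for two cells (SPSS computes and saves these for you as RES_1):

```python
from statistics import mean

# Hypothetical observations for two cells of the design -- invented data
cells = {
    ("Current", "Low"):  [6.1, 5.9, 6.0],
    ("Current", "High"): [7.0, 7.2, 6.8],
}

# A residual is the observation minus its cell mean; it is these residuals,
# examined per cell, whose approximate normality is assessed.
residuals = {
    cell: [round(v - mean(values), 3) for v in values]
    for cell, values in cells.items()
}
```

Within each cell the residuals sum to zero by construction; what matters for the assumption is the *shape* of their distribution in each cell.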

- Assumption #6: The variance of your dependent variable (residuals) should be equal in each cell of the design. This assumption is referred to as the assumption of homogeneity of variances. It requires that the (population) variance of the residuals, RES_1, is the same in each cell of the design. This assumption is necessary for statistical significance testing in the two-way ANOVA. Although this assumption can be violated a little in studies with equal, but not small, sample sizes in each cell of the design, it is still considered an important assumption. You can determine whether this assumption is met using **Levene’s test for equality of variances**.
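SPSS Statistics computes Levene's test for you, but the underlying idea is simple. In the median-based (Brown-Forsythe) variant, each observation is replaced by its absolute deviation from its group's median, and a one-way ANOVA F statistic is then computed on those deviations: a large statistic suggests unequal spread across groups. A pure-Python sketch of that statistic, with invented data (this omits the p-value, which requires the F distribution):

```python
from statistics import mean, median

def levene_W(groups):
    """Brown-Forsythe/Levene statistic: one-way ANOVA F computed on the
    absolute deviations from each group's median. Large W suggests
    unequal variances across groups."""
    z = [[abs(x - median(g)) for x in g] for g in groups]
    k = len(z)                          # number of groups
    n = sum(len(g) for g in z)          # total sample size
    grand = mean(x for g in z for x in g)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in z) / (k - 1)
    within = sum((x - mean(g)) ** 2 for g in z for x in g) / (n - k)
    return between / within

# Two hypothetical cells: similar centers, very different spreads -- invented data
w = levene_W([[0.1, -0.2, 0.0, 0.1], [1.5, -1.2, 0.3, -0.9]])
```

Because the second group is far more spread out than the first, the statistic here is large; SPSS would compare it against the F distribution with (k − 1, n − k) degrees of freedom to obtain a p-value.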

## Interpreting Results

After running the two-way ANOVA procedure in the __Procedure__ section earlier, SPSS Statistics will have generated a number of tables and graphs that provide the starting point to interpret your results.

Below you will find a general overview of the procedure for analyzing your results, which depends on the result of your interaction effect. It is intended to give you a clearer picture of the route that you might follow in this **Interpreting Results** section. In this regard, there are two main steps you can follow to interpret the results of your two-way ANOVA. First, you need to determine whether a statistically significant **interaction effect** exists (STEP #1). Second, depending on that result, you interpret either simple main effects (STEP #2A) or main effects (STEP #2B). Both main steps are briefly explained below.

- Step #1

Do you have a statistically significant interaction effect? The primary goal of running a two-way ANOVA is to determine whether there is an interaction between the two independent variables on the dependent variable. One of these two independent variables can act as a focal variable and the other as a moderator variable, depending on your study design. For example, we wanted to determine whether there was an interaction between the two independent variables, gender and education_level, on the dependent variable, political_interest. Our focal variable was education level (i.e., education_level) and our moderator variable was gender (i.e., gender). If **yes** – you have a statistically significant interaction effect – go to STEP 2A.

If **no** – you **do not** have a statistically significant interaction effect – go to STEP 2B.

- Step #2A

You have a statistically significant interaction effect. Do you have any statistically significant simple main effects or interaction contrasts? When the interaction term is statistically significant, this indicates that the effect that one independent variable (e.g., education_level) has on the dependent variable (e.g., political_interest) depends on the level of the other independent variable (e.g., gender). In our example, this means that we are comparing two effects: (a) the effect of education level on interest in politics in males; and (b) the effect of education level on interest in politics in females. These two effects are called **simple effects** or, more commonly, **simple main effects** (i.e., there are two simple main effects: one for males and one for females). Therefore, if you have a statistically significant interaction effect, you can follow up this result by running simple main effects.

**Note:** It is also possible to follow up a statistically significant interaction with interaction contrasts rather than simple main effects.

- Step #2B

You **do not** have a statistically significant interaction effect. Do you have any statistically significant main effects? When you do not have a statistically significant interaction effect, this indicates that the effect of an independent variable is the same for each level of the other independent variable. In other words, the simple main effects, mentioned in Step #2A above, are all equal. In our example, if there was not a statistically significant interaction effect, this would mean that the effect of education level on interest in politics is the same for males and females. As such, it might make sense to consider these simple main effects together and come up with an overall measure of the effect of education level *ignoring* gender. That is, why bother separating out the effects for males and females when they are the same? Just consider them together. We can do this by “averaging” the simple main effects. This is called a **main effect**. It is customary to interpret the main effects and, if a main effect is statistically significant, follow up with a post hoc analysis (e.g., all pairwise comparisons).
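The “averaging” of simple main effects into a main effect can be made concrete. Below is a pure-Python sketch using hypothetical political_interest cell means (invented for illustration, and deliberately constructed with no interaction: the education pattern is identical for males and females). The marginal means, obtained by averaging over gender, are what a main effect of education compares.

```python
from statistics import mean

# Hypothetical mean political_interest scores (0-100) per cell -- invented data
cell_means = {
    ("male", "school"): 38.0,   ("female", "school"): 40.0,
    ("male", "college"): 44.0,  ("female", "college"): 46.0,
    ("male", "university"): 53.0, ("female", "university"): 55.0,
}

levels = ["school", "college", "university"]

# Simple main effects of education: the pattern of means within each gender
male_means = [cell_means[("male", lvl)] for lvl in levels]
female_means = [cell_means[("female", lvl)] for lvl in levels]

# Main effect of education: marginal means, averaging over gender
marginal = [mean([cell_means[("male", lvl)], cell_means[("female", lvl)]])
            for lvl in levels]
```

Here the increase from one education level to the next is the same for both genders (no interaction), so collapsing to the marginal means loses nothing, which is exactly why the main effect is the sensible summary in this situation.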
