
PCA

Principal components analysis (PCA, for short) is a variable-reduction technique that shares many similarities with exploratory factor analysis. Its aim is to reduce a larger set of variables into a smaller set of ‘artificial’ variables (called principal components) that account for most of the variance in the original variables. Although principal components analysis is conceptually different from factor analysis, it is often used interchangeably with factor analysis in practice and is included within the Factor procedure in SPSS Statistics.

Assumptions

In order to run a principal components analysis, the following four assumptions must be met. The first assumption relates to your choice of study design, whilst the remaining three assumptions reflect the nature of your data:

  • Assumption #1: You have multiple variables that are measured at the continuous level (although ordinal data is very frequently used). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. Examples of ordinal variables include Likert items (e.g., a 7-point scale from “strongly agree” through to “strongly disagree”), amongst other ways of ranking categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from “Not very much” to “Yes, a lot”).

Note: Principal components analysis is a variable reduction technique and does not make a distinction between independent and dependent variables.

  • Assumption #2: There should be a linear relationship between all variables. This assumption needs to be tested before you run a principal components analysis. Although it can be tested using a matrix scatterplot, this is often considered impractical because the matrix can contain a very large number of pairwise relationships (sometimes over 500). As such, it is suggested that you randomly select just a few possible relationships between variables and test these (see the correlation sketch after this list). Variables with non-linear relationships can be transformed. The reason for this assumption is that a principal components analysis is based on Pearson correlation coefficients and, as such, there needs to be a linear relationship between the variables. In actual practice, this assumption is somewhat relaxed (even if it shouldn’t be) with the use of ordinal data for variables.
  • Assumption #3: There should be no outliers. The assumption of no outliers is important as outliers can have a disproportionate influence on the results. A common criterion is to treat component scores greater than 3 standard deviations away from the mean as outliers. As component scores are among the last values to be calculated in a principal components analysis, outliers are assessed last.
  • Assumption #4: There should be a large sample size for a principal components analysis to produce a reliable result. Many different rules of thumb have been proposed; these differ mostly in using either an absolute sample size number or a multiple of the number of variables in your sample. Generally speaking, a minimum of 150 cases, or 5 to 10 cases per variable, has been recommended as a minimum sample size.
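
If you want to automate the spot-checking suggested in Assumption #2 outside SPSS Statistics, the following Python sketch illustrates the idea. It is a minimal illustration, assuming your variables sit in a hypothetical pandas DataFrame named df; a low Pearson correlation does not prove non-linearity, so follow up any suspicious pair with a scatterplot.

```python
# A quick screen of randomly selected variable pairs via Pearson correlations,
# as an alternative to plotting every cell of a large scatterplot matrix.
# Assumes your variables are the columns of a pandas DataFrame named `df`.
import itertools
import random

import pandas as pd

def screen_random_pairs(df: pd.DataFrame, n_pairs: int = 10, seed: int = 1) -> pd.DataFrame:
    """Sample pairs of variables and report their Pearson correlations."""
    pairs = list(itertools.combinations(df.columns, 2))
    random.seed(seed)
    sampled = random.sample(pairs, min(n_pairs, len(pairs)))
    rows = [(a, b, df[a].corr(df[b])) for a, b in sampled]  # Pearson by default
    return pd.DataFrame(rows, columns=["var_1", "var_2", "pearson_r"])

# Usage (with your own data):
# print(screen_random_pairs(df, n_pairs=12))
```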

Interpreting Results

SPSS Statistics will have generated a number of tables and graphs that contain most of the information you need to report the results of a principal components analysis.

The output generated by SPSS Statistics is quite extensive and can provide a lot of information about your analysis. However, you will often find that the analysis is not yet complete and that you need to re-run it several times before you reach your final solution. We will focus on: (a) communalities; (b) extracting and retaining components; and (c) forced factor extraction.

  • Communalities: The communality is the proportion of each variable’s variance that is accounted for by the extracted components; it can also be expressed as a percentage.
  • Extracting and retaining components: A principal components analysis will produce as many components as there are variables. However, the purpose of principal components analysis is to explain as much of the variance in your variables as possible using as few components as possible. After you have extracted your components, there are four major criteria that can help you decide on the number of components to retain: (a) the eigenvalue-one criterion, (b) the proportion of total variance accounted for, (c) the scree plot test, and (d) the interpretability criterion. All except the first criterion require some degree of subjective judgement (see the sketch after this list).
  • Forced factor extraction: When extracting components as part of your principal components analysis, SPSS Statistics retains components based on the eigenvalue-one criterion by default. However, you can instruct SPSS Statistics to retain a specific number of components instead.
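
To make the three ideas above concrete (communalities, the eigenvalue-one criterion and forced extraction), here is a minimal Python sketch of a principal components analysis run on the correlation matrix, which mirrors the SPSS Statistics default. The array X is a hypothetical (cases × variables) data matrix.

```python
# PCA on the correlation matrix, illustrating the eigenvalue-one criterion,
# communalities and forced extraction. `X` is a hypothetical NumPy array of
# shape (cases, variables).
import numpy as np

def pca_summary(X, n_retain=None):
    R = np.corrcoef(X, rowvar=False)           # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # re-sort to descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Eigenvalue-one criterion: retain components with an eigenvalue > 1.
    # Pass n_retain explicitly to force a different number of components.
    if n_retain is None:
        n_retain = int(np.sum(eigvals > 1))

    loadings = eigvecs[:, :n_retain] * np.sqrt(eigvals[:n_retain])

    # Communality = sum of squared loadings across the retained components.
    communalities = (loadings ** 2).sum(axis=1)

    pct_variance = 100 * eigvals / eigvals.sum()  # for a scree-style summary
    return eigvals, pct_variance, n_retain, communalities
```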


Two-Way MANOVA

The two-way multivariate analysis of variance (two-way MANOVA) is often considered an extension of the two-way ANOVA for situations where there are two or more dependent variables. The primary purpose of the two-way MANOVA is to understand whether there is an interaction between the two independent variables on the combined dependent variables.

For example, you could use a two-way MANOVA to understand whether there were differences in students’ short-term and long-term recall of facts based on lecture duration and fact type (i.e., the two dependent variables are “short-term memory recall” and “long-term memory recall”, whilst the two independent variables are “lecture duration”, which has four groups – “30 minutes”, “60 minutes”, “90 minutes” and “120 minutes” – and “fact type”, which has two groups: “quantitative (numerical) facts” and “qualitative (textual/contextual) facts”). Alternately, you could use a two-way MANOVA to understand whether there were differences in the effectiveness of male and female police officers in dealing with violent crimes and crimes of a sexual nature taking into account a citizen’s gender (i.e., the two dependent variables are “perceived effectiveness in dealing with violent crimes” and “perceived effectiveness in dealing with sexual crimes”, whilst the two independent variables are “police officer gender”, which has two categories – “male police officers” and “female police officers” – and “citizen gender”, which also has two categories: “male citizens” and “female citizens”).

Assumptions

In order to run a two-way MANOVA, there are 10 assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you chose to make, whilst the remaining seven assumptions relate to how your data fits the two-way MANOVA model. These assumptions are:

  • Assumption #1: You have two or more dependent variables that are measured at the continuous level. Examples of continuous variables include height (measured in centimetres), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), firm size (measured in terms of the number of employees), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.

Note: You should note that SPSS Statistics refers to continuous variables as Scale variables.

  • Assumption #2: You have two independent variables where each independent variable consists of two or more categorical, independent groups. An independent variable with only two groups is known as a dichotomous variable whereas an independent variable with three or more groups is referred to as a polytomous variable. Example independent variables that meet this criterion include gender (e.g., two groups: male and female), ethnicity (e.g., three groups: Caucasian, African American, and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth. If you need more information about variables and their different types of measurement, please contact us.

Explanation 1: The “groups” of the independent variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”). However, these three terms – “groups”, “categories” and “levels” – can be used interchangeably. We will mostly refer to them as groups, but in some cases we will refer to them as levels. The only reason we do this is for clarity (i.e., it sometimes sounds more appropriate in a sentence to use levels instead of groups, and vice versa).

Explanation 2: The independent variable(s) in any type of MANOVA is also commonly referred to as a factor. For example, a two-way MANOVA is a MANOVA analysis involving two factors (i.e., two independent variables). Furthermore, when an independent variable/factor has independent groups (i.e., unrelated groups), it is further classified as a between-subjects factor because you are concerned with the differences in the dependent variables between different subjects. However, for clarity we will simply refer to them as independent variables in this guide.

Note: For the two-way MANOVA demonstrated in this guide, the independent variables are referred to as fixed factors or fixed effects. This means that the groups of each independent variable represent all the categories of the independent variable you are interested in. For example, you might be interested in exam performance differences between schools. If you investigated three different schools and it was only these three schools that you were interested in, the independent variable is a fixed factor. However, if you picked the three schools at random and they were meant to represent all schools, the independent variable is a random factor. This requires a different statistical test because the two-way MANOVA is the incorrect statistical test in these circumstances. If you have a random factor in your study design, please contact us.

  • Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in each group of the independent variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a two-way MANOVA) are groups where there is no relationship between the participants in any of the groups. Most often, this occurs simply by having different participants in each group. This is generally considered the most important assumption (Hair et al., 2014). Violation of this assumption is very serious (Stevens, 2009; Pituch & Stevens, 2016).

Note: When we talk about the observations being independent, this means that the observations (e.g., participants) are not related. More specifically, it is the errors that are assumed to be independent. In statistics, errors that are not independent are often referred to as correlated errors. This can lead to some confusion because of the similarity of the name to that of tests of correlation (e.g., Pearson’s correlation), but correlated errors simply means that the errors are not independent. The errors are at high risk of not being independent if the observations are not independent.

For example, if you split a group of individuals into four groups based on their physical activity level (e.g., a “sedentary” group, “low” group, “moderate” group and “high” group), no one in the sedentary group can also be in the high group, no one in the moderate group can also be in the high group, and so forth. As another example, you might randomly assign participants to either a control trial or one of two interventions. Again, no participant can be in more than one group (e.g., a participant in the control group cannot be in either of the intervention groups). This will be true of any independent groups you form (i.e., a participant cannot be a member of more than one group). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in different groups are considered unrelated, not just different people. Furthermore, participants in one group cannot influence any of the participants in any other group.

Independence of observations is largely a study design issue rather than something you can test for using SPSS Statistics, but it is an important assumption of the two-way MANOVA. If your study fails this assumption, you will need to use another statistical test instead of the two-way MANOVA.

  • Assumption #4: There should be a linear relationship between the dependent variables for each group combination of the independent variables. In a two-way MANOVA, there needs to be a linear relationship between each pair of dependent variables for each group combination of the independent variables. In this example, there is only one pair of dependent variables because there are only two dependent variables: humanities_score and science_score. If the variables are not linearly related, the power of the test is reduced (i.e., it can lead to a loss of power to detect differences). You can test whether a linear relationship exists by plotting and visually inspecting a scatterplot matrix for each group combination of the independent variables, gender and intervention. If the relationship approximately follows a straight line, you have a linear relationship. However, if you have something other than a straight line, for example, a curved line, you do not have a linear relationship.
  • Assumption #5: There should be no multicollinearity. Ideally, you want your dependent variables to be moderately correlated with each other. If the correlations are low, you might be better off running separate two-way ANOVAs – one for each dependent variable – rather than a two-way MANOVA. Alternately, if the correlation(s) are too high (greater than 0.9), you could have multicollinearity. This is problematic for MANOVA and needs to be screened out. Whilst there are more sophisticated methods of detecting multicollinearity, we show you the relatively simple method of inspecting Pearson correlation coefficients between the dependent variables to determine whether any relationships are too strongly correlated.
  • Assumption #6: There should be no univariate or multivariate outliers. There should be no univariate outliers in each group combination of the independent variables (i.e., for each cell of the design) for any of the dependent variables. Univariate outliers are often just called “outliers” and are the same type of outliers you will have come across if you have conducted t-tests or ANOVAs. In fact, this is a similar assumption to the two-way ANOVA, but for each dependent variable that you have in your MANOVA analysis. We refer to them as univariate in this guide to distinguish them from multivariate outliers, which you also have to test for. Univariate outliers are scores that are unusual in any cell of the design in that their value is extremely small or large compared to the other scores (e.g., 8 participants in a group scored between 60 and 75 out of 100 in a difficult maths test, but one participant scored 98 out of 100). Outliers can have a large negative effect on your results because they can exert a large influence (i.e., change) on the mean and standard deviation for that group, which can affect the statistical test results. Outliers are more important to consider when you have smaller sample sizes, as the effect of the outlier will be greater. Therefore, in this example, you need to investigate whether the dependent variables, humanities_score and science_score, have any univariate outliers for each group combination of gender and intervention (i.e., you are testing whether humanities score and science score are outlier free for each cell of the design). In addition to univariate outliers, you also have to test for multivariate outliers in a two-way MANOVA analysis. Multivariate outliers are cases (e.g., pupils in our example) that have an unusual combination of scores on the dependent variables. SPSS Statistics can calculate a measure called Mahalanobis distance that can be used to determine whether a particular case might be a multivariate outlier (see the Mahalanobis distance sketch after this list).
  • Assumption #7: There needs to be multivariate normality. The MANOVA needs the data to be multivariate normal. Unfortunately, multivariate normality is a particularly tricky assumption to test for and cannot be directly tested in SPSS Statistics. Instead, normality of each of the dependent variables for each group combination of the independent variables is often used in its place as a best ‘guess’ as to whether there is multivariate normality.

Explanation: If there is multivariate normality, there will be normally distributed data (residuals) for each of the group combinations of the independent variables for all the dependent variables. However, the opposite is not true; normally distributed group residuals do not guarantee multivariate normality.

Therefore, in this example, you need to investigate whether humanities_score and science_score are normally distributed for each cell of the design.

Note: Whilst it is most common to run only one type of normality test for a given analysis and to rely solely on that result, as you become more familiar with statistics you might start to evaluate normality based on the result of more than one method. If you have another method you would like to use or are curious about other methods (e.g., skewness and kurtosis values, or histograms), please contact us.

  • Assumption #8: You should have an adequate sample size. Although a larger sample size is better, at a bare minimum there need to be as many cases (e.g., pupils) in each cell of the design as there are dependent variables. In this example, this means that there needs to be two or more cases per cell of the design.
  • Assumption #9: There should be homogeneity of variance-covariance matrices. A further assumption of the two-way MANOVA is that there are similar variances and covariances. This assumption can be tested using Box’s M test of equality of covariance matrices.
  • Assumption #10: There should be homogeneity of variances. The two-way MANOVA assumes that there are equal variances in each cell of the design for each dependent variable. This can be tested using Levene’s test of equality of variances.

Note: If you have violated the assumption of homogeneity of variance-covariance matrices (see Assumption #9 above), the results from Levene’s test of equality of variances can inform you which dependent variable might be causing the problem (i.e., the dependent variable(s) that have unequal variances).
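
If you want to reproduce the Mahalanobis distance screen for multivariate outliers (Assumption #6) outside SPSS Statistics, here is a minimal Python sketch. The DataFrame dvs and the p < .001 cut-off are assumptions for illustration; comparing each squared distance to a chi-square critical value with df equal to the number of dependent variables is a commonly used criterion.

```python
# Flag multivariate outliers by comparing each case's squared Mahalanobis
# distance to a chi-square critical value (df = number of dependent variables).
# `dvs` is a hypothetical DataFrame holding only the dependent variables,
# e.g., dvs = df[["humanities_score", "science_score"]].
import numpy as np
import pandas as pd
from scipy.stats import chi2

def mahalanobis_flags(dvs: pd.DataFrame, alpha: float = 0.001) -> pd.Series:
    X = dvs.to_numpy(dtype=float)
    diff = X - X.mean(axis=0)                           # deviations from centroid
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))    # inverse covariance matrix
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances
    critical = chi2.ppf(1 - alpha, df=X.shape[1])       # e.g., p < .001 cut-off
    return pd.Series(d2 > critical, index=dvs.index, name="multivariate_outlier")
```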

Interpreting Results

After running the two-way MANOVA procedure and testing that your data meet the assumptions of a two-way MANOVA, SPSS Statistics will have generated a number of tables that contain all the information you need to report the results of your two-way MANOVA.

The two-way MANOVA has two main objectives: (a) to determine whether there is a statistically significant interaction effect between the two independent variables on the combined dependent variables; and (b) if so, to run follow-up tests to determine where the differences lie. Both of these objectives will be answered in the following sections:

  • Determining whether an interaction effect exists: In evaluating the main two-way MANOVA results, you can start by determining if there is a statistically significant interaction effect between the two independent variables on the combined dependent variables. There are four different multivariate statistics that can be used to test statistical significance when using SPSS Statistics (i.e., Pillai’s Trace, Wilks’ Lambda, Hotelling’s Trace and Roy’s Largest Root). We will explain which to choose and how to interpret these statistics (see the sketch after this list).
  • Univariate interaction effects and simple main effects: If the interaction is statistically significant, one typical approach is to determine whether there are any statistically significant univariate interaction effects for each dependent variable separately (Pituch & Stevens, 2016). SPSS Statistics will have run these statistics for you (i.e., a two-way ANOVA for each dependent variable). If you have any statistically significant interaction effects, you can follow these up with simple main effects. We will explain how to interpret these follow-up tests.
  • Main effects and univariate main effects: If your interaction effect is not statistically significant, you would follow up the main effects instead. If you have statistically significant main effects, you can follow these up with univariate main effects. We will explain how to interpret these follow-up tests.
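
Although this guide demonstrates the analysis in SPSS Statistics, the same two-way MANOVA can be sketched in Python with statsmodels. The file name scores.csv and its columns are hypothetical stand-ins for this guide's example variables; mv_test() reports all four multivariate statistics for each effect, including the interaction.

```python
# Two-way MANOVA in Python. The CSV and its columns (humanities_score,
# science_score, gender, intervention) are hypothetical stand-ins for the
# guide's example data.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("scores.csv")

manova = MANOVA.from_formula(
    "humanities_score + science_score ~ C(gender) * C(intervention)",
    data=df,
)
# Prints Pillai's trace, Wilks' lambda, Hotelling's trace and Roy's largest
# root for each effect; the interaction term addresses objective (a).
print(manova.mv_test())
```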


Multiple Regression Analysis

A multiple regression is used to predict a continuous dependent variable based on multiple independent variables. As such, it extends simple linear regression, which is used when you have only one continuous independent variable. Multiple regression also allows you to determine the overall fit (variance explained) of the model and the relative contribution of each of the predictors to the total variance explained.

Note 1: The dependent variable can also be referred to as the “outcome”, “target” or “criterion” variable, whilst the independent variables can be referred to as “predictor”, “explanatory” or “regressor” variables. It does not matter which of these you use, but we will continue to use “dependent variable” and “independent variable” for consistency.

Note 2: This guide deals with “standard” multiple regression rather than a specific type of multiple regression, such as hierarchical multiple regression or stepwise regression, amongst others.

For example, you could use multiple regression to understand whether exam performance can be predicted based on revision time, test anxiety, lecture attendance, course studied and gender. Here, your continuous dependent variable would be “exam performance”, whilst you would have three continuous independent variables – “revision time”, measured in hours; “test anxiety”, measured using the TAI index; and “lecture attendance”, measured as a percentage of classes attended – one nominal independent variable – “course studied”, which has four groups: business, psychology, biology and mechanical engineering – and one dichotomous independent variable – “gender”, which has two groups: “males” and “females”. You could also use multiple regression to determine how much of the variation in exam performance can be explained by revision time, test anxiety, lecture attendance, course studied and gender “as a whole”, but also the “relative contribution” of each of these independent variables in explaining the variance.

Assumptions

  • Assumption #1: You have one dependent variable that is measured at the continuous level (i.e., the interval or ratio level). Examples of continuous variables include height (measured in centimetres), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), firm size (measured in terms of the number of employees), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), weight (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.

Note 1: You should note that SPSS Statistics refers to continuous variables as Scale variables.

Note 2: The dependent variable can also be referred to as the “outcome”, “target” or “criterion” variable. It does not matter which of these you use, but we will continue to use “dependent variable” for consistency.

  • Assumption #2: You have two or more independent variables that are measured either at the continuous or nominal level. Examples of continuous variables are provided above. Examples of nominal variables include gender (e.g., two categories: male and female), ethnicity (e.g., three categories: Caucasian, African American, and Hispanic), physical activity level (e.g., four categories: sedentary, low, moderate and high) and profession (e.g., five categories: surgeon, doctor, nurse, dentist, and therapist).

Note 1: The “categories” of the independent variable are also referred to as “groups” or “levels”, but the term “levels” is usually reserved for the categories of an ordinal variable (e.g., an ordinal variable such as “fitness level”, which has three levels: “low”, “moderate” and “high”). However, these three terms – “categories”, “groups” and “levels” – can be used interchangeably. We refer to them as categories in this guide.

Note 2: An independent variable with only two categories is known as a dichotomous variable whereas an independent variable with three or more categories is referred to as a polytomous variable.

Important: If one of your independent variables was measured at the ordinal level, it can still be entered in a multiple regression, but it must be treated as either a continuous or nominal variable. It cannot be entered as an ordinal variable. Examples of ordinal variables include Likert items (e.g., a 7-point scale from strongly agree through to strongly disagree), physical activity level (e.g., 4 groups: sedentary, low, moderate and high), customer liking a product (ranging from “Not very much”, to “It is OK”, to “Yes, a lot”), and so forth.

  • Assumption #3: You should have independence of observations (i.e., independence of residuals). The assumption of independence of observations in a multiple regression is designed to detect 1st-order autocorrelation, which occurs when adjacent observations (specifically, their errors) are correlated (i.e., not independent). This is largely a study design issue because the observations in a multiple regression must not be related, or you would need to run a different statistical test, such as time series methods. In SPSS Statistics, independence of observations can be checked using the Durbin-Watson statistic.
  • Assumption #4: There needs to be a linear relationship between (a) the dependent variable and each of your independent variables, and (b) the dependent variable and the independent variables collectively. The assumption of linearity in a multiple regression needs to be tested in two parts (in no particular order). You need to: (a) establish whether a linear relationship exists between the dependent and independent variables collectively, which can be achieved by plotting a scatterplot of the studentized residuals against the (unstandardized) predicted values; and (b) establish whether a linear relationship exists between the dependent variable and each of your independent variables, which can be achieved using partial regression plots between each independent variable and the dependent variable (although you can ignore any categorical independent variables, e.g., gender).
  • Assumption #5: Your data needs to show homoscedasticity of residuals (equal error variances). The assumption of homoscedasticity is that the residuals are equal for all values of the predicted dependent variable (i.e., the variances along the line of best fit remain similar as you move along the line). To check for heteroscedasticity, you can use the same plot you created to check linearity in the previous section, namely the plot of the studentized residuals against the unstandardized predicted values.
  • Assumption #6: Your data must not show multicollinearity. Multicollinearity occurs when you have two or more independent variables that are highly correlated with each other. This leads to problems with understanding which independent variable contributes to the variance explained in the dependent variable, as well as technical issues in calculating a multiple regression model. You can use SPSS Statistics to detect multicollinearity through an inspection of correlation coefficients and Tolerance/VIF values.
  • Assumption #7: There should be no significant outliers, high leverage points or highly influential points. Outliers, leverage points and influential points are different terms used to represent observations in your data set that are in some way unusual when you wish to perform a multiple regression analysis. These different classifications of unusual points reflect the different impact they have on the regression line. An observation can be classified as more than one type of unusual point. However, all these points can have a very negative effect on the regression equation that is used to predict the value of the dependent variable based on the independent variables. This can change the output that SPSS Statistics produces and reduce the predictive accuracy of your results as well as the statistical significance. Fortunately, when using SPSS Statistics to run multiple regression on your data, you can detect possible outliers, high leverage points and highly influential points.
  • Assumption #8: You need to check that the residuals (errors) are approximately normally distributed. In order to be able to run inferential statistics (i.e., determine statistical significance), the errors in prediction – the residuals – need to be normally distributed. Two common methods you can use to check for the assumption of normality of the residuals are: (a) a histogram with superimposed normal curve and a P-P Plot; or (b) a Normal Q-Q Plot of the studentized residuals. A diagnostics sketch covering several of the checks in this list follows below.
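
If you also work outside SPSS Statistics, several of these checks can be reproduced in Python with statsmodels. This is a minimal sketch under assumed data: the file and column names (VO2max, age, weight, heart_rate, gender) are hypothetical stand-ins borrowed from the worked example later in this guide.

```python
# Assumption checks for a multiple regression using statsmodels. The data
# file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("fitness.csv")
model = smf.ols("VO2max ~ age + weight + heart_rate + C(gender)", data=df).fit()

# Assumption #3: independence of observations (values near 2 suggest no
# 1st-order autocorrelation).
print("Durbin-Watson:", durbin_watson(model.resid))

# Assumption #6: multicollinearity via VIF (values above about 10 are a
# common cause for concern).
exog = model.model.exog
for i, name in enumerate(model.model.exog_names):
    if name != "Intercept":
        print(name, variance_inflation_factor(exog, i))

# Assumptions #4, #5 and #7: studentized residuals (plot these against the
# predicted values), leverage and Cook's distance.
influence = model.get_influence()
studentized = influence.resid_studentized_external
leverage = influence.hat_matrix_diag
cooks_d = influence.cooks_distance[0]
predicted = model.fittedvalues  # unstandardized predicted values
```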

Interpreting Results

After running the multiple regression procedure and testing that your data meet the assumptions of a multiple regression in the previous two sections, SPSS Statistics will have generated a number of tables that contain all the information you need to report the results of your multiple regression.

There are three main objectives that you can achieve with the output from a multiple regression: (1) determine the proportion of the variation in the dependent variable explained by the independent variables; (2) predict dependent variable values based on new values of the independent variables; and (3) determine how much the dependent variable changes for a one unit change in the independent variables. All of these objectives will be answered in the following sections.

When interpreting and reporting your results from a multiple regression, we suggest working through three stages: (a) determine whether the multiple regression model is a good fit for the data; (b) understand the coefficients of the regression model; and (c) make predictions of the dependent variable based on values of the independent variables. To recap:

  • First, you need to determine whether the multiple regression model is a good fit for the data: There are a number of statistics you can use to determine whether the multiple regression model is a good fit for the data. These are: (a) the multiple correlation coefficient; (b) the percentage (or proportion) of variance explained; (c) the statistical significance of the overall model; and (d) the precision of the predictions from the regression model.
  • Second, you need to understand the coefficients of the regression model. These coefficients are useful for understanding whether there is a linear relationship between the dependent variable and the independent variables. In addition, in our example, you can use the regression equation to calculate predicted values of VO2max for a given set of values for age, weight, heart rate and gender.
  • Third, you can use SPSS Statistics to make predictions of the dependent variable based on values of the independent variables: For example, you can use the regression equation from the previous section to predict VO2max for a different set of values for age, weight, heart rate and gender (e.g., the VO2max for a 30-year-old male weighing 80 kg with a heart rate of 133 bpm); see the sketch below.
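
As a minimal sketch of this final step, continuing the hypothetical VO2max model fitted in the assumptions sketch earlier in this guide:

```python
# Predict the dependent variable for new cases. `model` is the hypothetical
# VO2max regression fitted above; the gender coding must match your own data.
import pandas as pd

new_cases = pd.DataFrame(
    {"age": [30], "weight": [80], "heart_rate": [133], "gender": ["male"]}
)
print(model.predict(new_cases))  # predicted VO2max for a 30-year-old male,
                                 # 80 kg, heart rate 133 bpm
```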


One-Way RM ANOVA

The one-way repeated measures analysis of variance (ANOVA) is an extension of the paired-samples t-test and is used to determine whether there are any statistically significant differences between the means of three or more levels of a within-subjects factor. The levels are related because they contain the same cases (e.g., participants) in each level. The participants are either the same individuals tested on three or more occasions on the same dependent variable or the same individuals tested under three or more different conditions on the same dependent variable. This test is also referred to as a within-subjects ANOVA or ANOVA with repeated measures.

Note: Whilst a one-way repeated measures ANOVA can be used when your within-subjects factor has just two levels, it is typically only used when the within-subjects factor has three or more levels. The reason for this is that when there are only two levels, a paired-samples t-test is more commonly used. This is why we refer to the one-way repeated measures ANOVA having three or more levels in this guide.

For example, you could use a one-way repeated measures ANOVA to understand whether there is a difference in cigarette consumption amongst heavy smokers after a hypnotherapy program (e.g., with three time points: cigarette consumption immediately before, 1 month after and 6 months after the hypnotherapy program). In this example, “cigarette consumption” is your dependent variable, whilst your within-subjects factor is “time” (i.e., with three levels, where each of the three time points is considered a level). Alternately, you could use a one-way repeated measures ANOVA to understand whether there is a difference in braking distance in a car based on four different colored tints of windscreen (e.g., braking distance under four conditions: no tint, low tint, medium tint, and dark tint). In this example, “braking distance” is your dependent variable, whilst your within-subjects factor is “condition” (i.e., with four levels, where each of the four conditions is considered a level).

Note: Whilst the repeated measures ANOVA is used when you have just one within-subjects factor, if you have two within-subjects factors (e.g., you measured time and condition), you will need to use a two-way repeated measures ANOVA, also known as a within-within-subjects ANOVA.

Assumptions

In order to run a one-way repeated measures ANOVA, there are five assumptions that need to be considered. The first two relate to your choice of study design, whilst the other three reflect the nature of your data. These assumptions are:

  • Assumption #1: You have one dependent variable that is measured at the continuous level (i.e., it is measured at the interval or ratio level). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. You can learn more about continuous variables in our article: Types of Variable.
  • Assumption #2: You have one within-subjects factor that consists of three or more categorical levels. There are two particularly important terms that you will need to understand in order to work through this guide: a “within-subjects factor” and its “levels”. Both terms are explained below. A factor is another name for an independent variable. However, we use the term “factor” instead of “independent variable” throughout this guide because in a repeated measures ANOVA, the independent variable is often referred to as the within-subjects factor. The “within-subjects” part simply means that the same cases (e.g., participants) are either: (a) measured on the same dependent variable at different “time points”; or (b) measured on the same dependent variable whilst undergoing different “conditions” (also known as “treatments”). For example, you might have measured 10 individuals’ 100 m sprint times (the dependent variable) on five occasions (i.e., five time points) during the athletics season to determine whether their sprint performance improved. Alternately, you may have measured 20 individuals’ task performance (the dependent variable) when working in three different lighting conditions (e.g., red, blue and natural lighting) to determine whether task performance was affected by the colour of lighting in the room. For now, all you need to know is that a within-subjects factor is another name for an independent variable in a one-way repeated measures ANOVA where the same cases (e.g., participants) are measured on the same dependent variable on three or more occasions. When referring to a within-subjects factor, we also talk about it having “levels”. More specifically, a within-subjects factor has “categorical” levels, which means that it is measured on a nominal or ordinal scale. Such ordinal variables in a one-way repeated measures ANOVA are typically three or more “time points” (e.g., three time points where the dependent variable is measured: “pre-intervention”, “post-intervention” and “6-month follow-up”; or four time points where the dependent variable is measured: at “10 secs”, “20 secs”, “30 secs” and “40 secs”). Such nominal variables in a one-way repeated measures ANOVA are typically three or more “conditions” (e.g., three conditions where the dependent variable is measured: a “control”, “intervention A” and “intervention B”; or four conditions where the dependent variable is measured: in a room with “red lighting”, “blue lighting”, “yellow lighting” and “natural lighting”). The number of time points or conditions is referred to as the “levels” of the ordinal or nominal variable (e.g., three time points reflect three levels). Therefore, when we refer to a “level” of a within-subjects factor in this guide, we are referring to only “one” level (e.g., the room with “red lighting” or the room with “blue lighting”). However, when we refer to “levels” of a within-subjects factor, we are referring to “two or more” levels (e.g., “red and blue” lighting, or “red, blue and yellow” lighting).

  • Assumption #3: There should be no significant outliers in any level of the within-subjects factor. Outliers are simply single data points within your data that do not follow the usual pattern (e.g., in a study of 100 students’ IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative impact on the one-way repeated measures ANOVA by (a) distorting the differences between the levels of the within-subjects factor (whether increasing or decreasing the scores on the dependent variable), and (b) causing problems when generalizing the results (of the sample) to the population.
  • Assumption #4: Your dependent variable should be approximately normally distributed for each level of the within-subjects factor. The assumption of normality is necessary for statistical significance testing using a one-way repeated measures ANOVA. However, the one-way repeated measures ANOVA is considered “robust” to violations of normality. This means that some violations of this assumption can be tolerated and the test will still provide valid results. Therefore, you will often hear of this test only requiring approximately normal data. Furthermore, as sample size increases, the distribution can be very non-normal and, thanks to the Central Limit Theorem, the one-way repeated measures ANOVA can still provide valid results. Also, it should be noted that if the distributions are all skewed in a similar manner (e.g., all moderately negatively skewed), this is not as troublesome as the situation where you have levels with differently-shaped distributions (e.g., not all levels of a within-subjects factor are moderately negatively skewed). Therefore, in this example, you need to investigate whether CRP is normally distributed for each level of the within-subjects factor, time. In other words, whether crp_pre, crp_mid and crp_post are normally distributed.

Note: Technically, it is the residuals (errors) that need to be normally distributed. However, for a one-way repeated measures ANOVA, the distribution of the scores (observations) in each level of the within-subjects factor will be the same as the distribution of the residuals in each level.

  • Assumption #5: Known as sphericity, the variances of the differences between all combinations of levels of the within-subjects factor must be equal. Unfortunately, it is considered difficult not to violate the assumption of sphericity (e.g., Weinfurt, 2000), and violation causes the test to become too liberal (i.e., it leads to an increase in the Type I error rate; that is, the probability of detecting a statistically significant result when there isn’t one). Fortunately, SPSS Statistics makes it easy to test whether your data has met or violated this assumption using Mauchly’s test of sphericity.
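
Outside SPSS Statistics, Mauchly's test can also be run in Python with the third-party pingouin package. This is a sketch under assumptions: the long-format file and the id/time/crp column names are hypothetical stand-ins for this guide's example data.

```python
# Mauchly's test of sphericity with pingouin. The CSV is assumed to be in
# long format, with one row per participant per time point.
import pandas as pd
import pingouin as pg

df = pd.read_csv("crp_long.csv")  # hypothetical columns: id, time, crp
spher = pg.sphericity(df, dv="crp", within="time", subject="id")
print(f"Mauchly's W = {spher.W:.3f}, p = {spher.pval:.3f}; met: {spher.spher}")
```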

Interpreting Results

After running the one-way repeated measures ANOVA procedure with either post hoc tests or planned contrasts in the previous section, SPSS Statistics will have generated a number of tables that contain all the information you need to interpret and report your results. We show you how to interpret these results.

You will get some useful descriptive statistics from the SPSS Statistics output that will help you get a “feel” for your data (and will also be used when you report your results). This includes information on sample size, which levels of the within-subjects factor had higher or lower mean scores (and whether there are any trends), and whether the variation in each level is similar.

If you have been following this guide from the very beginning, you’ll know that how you interpret your results after running a one-way repeated measures ANOVA depends on whether your data met or violated the assumption of sphericity. We show you how to interpret this critical assumption for your data, which will determine what you do next:

  • Sphericity was met: If your data has met the assumption of sphericity, you simply need to interpret the ‘standard’ one-way repeated measures ANOVA output in SPSS Statistics. We will (a) interpret the SPSS Statistics output for the one-way repeated measures ANOVA, including the means, standard deviations, F-value, degrees of freedom and p-value; (b) determine whether the means of the dependent variable are statistically significantly different for the different levels of the within-subjects factor; (c) determine if we can reject, or fail to reject, the null hypothesis; and (d) show how you can bring all of this together into a single paragraph that explains your results. You can also add an effect size to your analysis, which is becoming a more common method of expressing your results. We will teach you how to calculate an effect size from your one-way repeated measures ANOVA results, and how to add this to your write-up.
  • Sphericity was violated: If your data has violated the assumption of sphericity, you can still continue with your analysis. However, you will have to interpret the results from a modified one-way repeated measures ANOVA where there have been adjustments to the degrees of freedom for both the within-subjects factor and error effect (Greenhouse & Geisser, 1959), which has an impact on the statistical significance (i.e., p-value) of the test. Therefore, we will (a) interpret the SPSS Statistics output for the Greenhouse and Geisser (1959) adjusted one-way repeated measures ANOVA, explaining the means, standard deviations, F-value, degrees of freedom and p-value; (b) determine whether the means of the dependent variable are statistically significantly different for the different levels of the within-subjects factor; (c) determine if we can reject, or fail to reject, the null hypothesis; and (d) bring all of this together into a single paragraph that explains your results. You can also add an effect size to your analysis, which is becoming a more common method of expressing your results. Therefore, we will teach you how to calculate an effect size from your one-way repeated measures ANOVA results, and how to add this to your write-up.
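
To see both routes in one output outside SPSS Statistics, here is a minimal pingouin sketch continuing the hypothetical long-format CRP data loaded in the sphericity sketch above; with correction="auto", a Greenhouse-Geisser-corrected p-value is reported when sphericity is violated.

```python
# One-way repeated measures ANOVA with an automatic Greenhouse-Geisser
# correction, reusing the hypothetical long-format DataFrame `df` from above.
import pingouin as pg

aov = pg.rm_anova(data=df, dv="crp", within="time", subject="id",
                  correction="auto")
# The output table includes F, degrees of freedom, p-values (uncorrected and,
# where needed, GG-corrected), the epsilon estimate and an effect size.
print(aov)
```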


Chi-Square Test

The chi-square test can be used to test a variety of sizes of contingency tables, as well as more than one type of null and alternative hypotheses. This guide focuses on contingency tables that are greater than 2 x 2, which are often referred to as r x c contingency tables, and tests whether two variables measured at the nominal level are independent (i.e., whether there is an association between the two variables). Most commonly this test is called the chi-square test of independence, but it is also known as the chi-square test for association. Whilst it is also possible to perform the chi-square test of independence on ordinal variables, you will lose the ordered nature of the data by doing so and there will most likely be more suitable tests to run (see our Statistical Test Selector). In order to make the correct inferences from a chi-square test of independence you will need to have undertaken a naturalistic study design.

Note: If you are interested in understanding (and modelling) associations between three or more categorical variables you should consider loglinear analysis instead of the chi-square test of independence.

For example, you could use a chi-square test of independence to determine whether there is an association between the political party a person votes for in the United Kingdom and their housing tenure (i.e., your two nominal variables would be “political affiliation”, which has five categories – “Conservatives”, “Labour”, “UKIP”, the “Liberal Democrats” and “Green Party” – and “housing tenure”, which has four categories: “Own home”, “Mortgaged home”, “Private renter” and “Social housing renter”). If there is an association (positive or negative), you can also determine the strength/magnitude of this association. Alternately, you could use a chi-square test of independence to determine whether there is an association between the preferred brand of luxury car and the country of buyers (i.e., your two nominal variables would be “luxury car brand preference”, which has five categories – Audi, BMW, Land Rover, Mercedes and Porsche – and “buyer country”, which has five categories: “United Kingdom”, “France”, “Germany”, “Italy” and “Spain”). Again, if there is an association (positive or negative), you can also determine the strength/magnitude of this association.

Assumptions

In order to run a chi-square test of independence, there are four assumptions that need to be considered. The first three assumptions relate to how you measured your variables, whilst the fourth assumption relates to how the data fits the chi-square test of independence model. These assumptions are:

  • Assumption #1: You have two nominal variables. Examples of nominal variables include ethnicity (e.g., three groups: Caucasian, African American and Hispanic), seasons (e.g., four groups: “spring”, “summer”, “autumn” and “winter”), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth. If you need more information about variables and their different types of measurement, see our Types of Variables guide.

Explanation: The “groups” of a categorical variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”). However, these three terms – “groups”, “categories” and “levels” – can be used interchangeably. We will mostly refer to them as categories, but in some cases, we will refer to them as groups or levels. The only reason we do this is for clarity (i.e., it sometimes sounds more appropriate in a sentence to use groups or levels instead of categories, and vice versa).

Important: Whilst a chi-square test of independence can be used with ordinal variables, it is strictly a test for nominal variables. Therefore, even though you can use a chi-square test of independence with ordinal variables, the chi-square test of independence will treat them as nominal variables, and you will lose their ordered nature. If you have ordinal variables and want to keep their ordered nature, there are alternative statistical tests you can use, such as Kendall’s tau, Spearman’s correlation and the linear-by-linear association, amongst others. See the Associations route of our Statistical Test Selector to help you choose the appropriate test.

Note: If you have three or more categorical variables rather than just two categorical variables, a loglinear analysis can be used instead of a chi-square test of independence.

If your study fails this assumption, you will need to use another statistical test instead of a chi-square test of independence (you can use our Statistical Test Selector to find the appropriate statistical test).

  • Assumption #2: You should have independence of observations, which means that there is no relationship between the observations in each group of each variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a chi-square test of independence) are groups where there is no relationship between the participants in either of the groups. Most often, this occurs simply by having different participants in each group.
    For example, if you split a group of individuals into two groups based on their gender (i.e., a male group and a female group), no one in the female group can be in the male group and vice versa. As another example, you might randomly assign participants to either a control trial or an intervention trial. Again, no participant can be in both the control group and the intervention group. This will be true of any two independent groups you form (i.e., a participant cannot be a member of both groups). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in both groups are considered unrelated, not just different people; for example, participants might be considered related if they are husband and wife, or twins. Furthermore, participants in Group A cannot influence any of the participants in Group B, and vice versa.
  • Assumption #3: The null hypothesis being tested using the chi-square test of independence in this guide cannot be used with all types of sampling (i.e., study design). This is explained in more detail in the section, Sampling and the chi-square test of independence, further down this page.
  • Assumption #4: As will be discussed further in our Assumptions section, the chi-square test of independence must also meet one assumption that relates to the nature of your data in order to provide a valid result: all cells should have expected counts greater than or equal to five (see the sketch below).
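
You can check the expected-count assumption directly from the expected frequencies themselves. Here is a minimal Python sketch with scipy, using a hypothetical 3 x 3 table of observed counts:

```python
# chi2_contingency returns the expected frequencies alongside the test, so
# the "all expected counts >= 5" assumption can be checked directly.
# `table` is a hypothetical 3 x 3 crosstabulation of observed counts.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[35, 28, 21],
                  [40, 33, 30],
                  [22, 19, 25]])
chi2_stat, p, dof, expected = chi2_contingency(table)
print("Smallest expected count:", expected.min())  # should be >= 5
```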

Interpreting Results

SPSS Statistics will have generated all the information you need to report the results of the chi-square test of independence, Cramer’s V to determine the strength/magnitude of any association, and where appropriate, adjusted standardized residuals to determine which cells deviate from independence. In this section, we explain how to interpret these results. We also show how to write up your results as you work through the section.

In the following sections, we interpret the results as follows:

  • Sample characteristics and crosstabulation: We will (a) check the characteristics of the sample you have just tested, and (b) discuss how to interpret the crosstabulation and the observed and expected frequencies for each cell of the design. This includes interpreting the observed counts, how the observed counts can be viewed as percentages and proportions, and the usefulness of comparing the observed and expected counts before interpreting the chi-square test of independence result.
  • Chi-square test of independence and strength of association: If you have an adequate sample size to run and interpret the chi-square test of independence, we can show you how to do this. We will explain: (a) whether you have a statistically significant chi-square test of independence result; (b) on this basis, whether you should reject the null hypothesis and accept the alternative hypothesis, or fail to reject the null hypothesis and reject the alternative hypothesis; and (c) the strength/magnitude of any association using Cramer’s V, which is a measure that provides an estimate of the strength of the association between your two variables.
  • Post hoc testing using adjusted standardized residuals: If the chi-square test of independence was statistically significant, we will show you how to determine which cells of your design deviate from independence using adjusted standardized residuals (see the sketch after this list).
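
These three steps can also be sketched in Python with scipy, continuing the hypothetical `table` from the assumptions sketch above. The adjusted standardized residual formula used here is the standard one: observed minus expected, divided by its estimated standard error.

```python
# Chi-square test of independence, Cramer's V and adjusted standardized
# residuals for the hypothetical `table` defined above.
import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats.contingency import association

chi2_stat, p, dof, expected = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2_stat:.2f}, p = {p:.4f}")
print("Cramer's V:", association(table, method="cramer"))

# Adjusted standardized residuals: cells beyond about +/-1.96 deviate
# from independence at the 5% level.
n = table.sum()
row_p = table.sum(axis=1, keepdims=True) / n
col_p = table.sum(axis=0, keepdims=True) / n
adj_resid = (table - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))
print(np.round(adj_resid, 2))
```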


Two-Way ANOVA

The two-way ANOVA is used to determine whether there is an interaction effect between two independent variables on a continuous dependent variable (i.e., if a two-way interaction effect exists). In many ways, the two-way ANOVA can be considered as an extension of the one-way ANOVA, which deals with just one independent variable rather than the two independent variables of the two-way ANOVA.

Note: It is quite common for the independent variables to be called “factors” or “between-subjects factors”, but we will continue to refer to them as independent variables in this guide. Furthermore, it is worth noting that the two-way ANOVA is also referred to as a “factorial ANOVA” or, more specifically, as a “two-way between-subjects ANOVA”.

A two-way ANOVA can be used in a number of situations. For example, consider an experiment where two drugs were being given to elderly patients to treat heart disease. One of the drugs was the current drug being used to treat heart disease and the other was an experimental drug that the researchers wanted to compare to the current drug. The researchers also wanted to understand how the drugs compared in low- and high-risk elderly patients. The goal was for the drugs to lower cholesterol concentration in the blood. The two independent variables are drug with two levels (“Current” and “Experimental”) and risk with two levels (“Low” and “High”). The dependent variable was cholesterol (i.e., cholesterol concentration in the blood). The researchers want to know: (a) whether the experimental drug is better or worse than the current drug at lowering cholesterol; and (b) whether the effect of the two drugs is different depending on whether elderly patients are classified as low or high risk. These two aims are entirely typical of a two-way ANOVA analysis. Importantly, the second aim is answered by determining whether there is a statistically significant interaction effect. This is usually given first priority in a two-way ANOVA analysis because its result will determine whether the researchers’ first aim is misleading or incomplete. Assuming that a statistically significant interaction effect is found, this indicates that the two drugs have different effects in low- and high-risk elderly patients (i.e., the effect of drug on cholesterol depends on the level of risk). Whether you find a statistically significant interaction, and the type of interaction, will determine which effects in the two-way ANOVA you should interpret and which post hoc tests you may want to run. These issues are explained as you work through the guide, so you will know exactly what each statistical test is telling you and how to write up your results accurately. A minimal sketch of this analysis follows the note below.

Note: A two-way ANOVA can be described by the number of groups in each independent variable. So, for example, if you had a two-way ANOVA with gender (male/female) and transport type (bus/train/car) as the independent variables, you could describe this as a 2 x 3 ANOVA. This is a fairly generic way to describe ANOVAs with two or more between-subjects factors (e.g., a three-way ANOVA could be written as a 2 x 3 x 5 ANOVA if another independent variable was included which had five groups).
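To make the design concrete, here is a minimal sketch of how the drug-and-risk example above could be fitted outside SPSS Statistics, using Python and statsmodels; the data frame and its values are invented purely for illustration.

  import pandas as pd
  import statsmodels.api as sm
  from statsmodels.formula.api import ols

  # Hypothetical long-format data: one row per patient
  df = pd.DataFrame({
      "drug": ["Current"] * 4 + ["Experimental"] * 4,
      "risk": ["Low", "Low", "High", "High"] * 2,
      "cholesterol": [5.4, 5.1, 6.2, 6.0, 4.8, 4.6, 6.1, 6.3],
  })

  # Fit the 2 x 2 model including the drug*risk interaction term;
  # Sum (effects) coding makes the Type III table comparable to SPSS Statistics
  model = ols("cholesterol ~ C(drug, Sum) * C(risk, Sum)", data=df).fit()
  print(sm.stats.anova_lm(model, typ=3))

The row for the interaction term is the one to read first, for the reasons set out above.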

Assumptions

In order to run a two-way ANOVA, there are six assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you chose to make, whilst the remaining three assumptions relate to how your data fits the two-way ANOVA model. These assumptions are:

  • Assumption #1: You have one dependent variable that is measured at the continuous level (i.e., the interval or ratio level). Examples of continuous variables include height (measured in meters and centimeters), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), firm size (measured in terms of the number of employees), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), weight (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.

Note: You should note that SPSS Statistics refers to continuous variables as Scale variables.

  • Assumption #2: You have two independent variables where each independent variable consists of two or more categorical, independent groups. An independent variable with only two groups is known as a dichotomous variable whereas an independent variable with three or more groups is referred to as a polytomous variable. Example independent variables that meet this criterion include gender (e.g., two groups: male and female), ethnicity (e.g., three groups: Caucasian, African American, and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth. If you need more information about variables and their different types of measurement, please contact us.

Explanation 1: The “groups” of the independent variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”). However, these three terms – “groups”, “categories” and “levels” – can be used interchangeably. We will mostly refer to them as groups, but in some cases, we will refer to them as levels. The only reason we do this is for clarity (i.e., it sometimes sounds more appropriate in a sentence to use levels instead of groups, and vice versa).

Explanation 2: The independent variable(s) in any type of ANOVA is also commonly referred to as a factor. For example, a two-way ANOVA is an ANOVA analysis involving two factors (i.e., two independent variables). Furthermore, when an independent variable/factor has independent groups (i.e., unrelated groups), it is further classified as a between-subjects factor because you are concerned with the differences in the dependent variable between different subjects. However, for clarity, we will simply refer to them as independent variables in this guide.

Note: For the two-way ANOVA demonstrated in this guide, the independent variables are referred to as fixed factors or fixed effects. This means that the groups of each independent variable represent all the categories of the independent variable you are interested in. For example, you might be interested in exam performance differences between schools. If you investigated three different schools and it was only these three schools that you were interested in, the independent variable is a fixed factor. However, if you picked the three schools at random and they were meant to represent all schools, the independent variable is a random factor. This requires a different statistical test because the two-way ANOVA is the incorrect statistical test in these circumstances. If you have a random factor in your study design, please contact us and we will look at adding an SPSS Statistics guide to help with this.

  • Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in each group of the independent variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a two-way ANOVA) are groups where there is no relationship between the participants in any of the groups. Most often, this occurs simply by having different participants in each group.

Note: When we talk about the observations being independent, this means that the observations (e.g., participants) are not related. Specifically, it is the errors that are assumed to be independent. In statistics, errors that are not independent are often referred to as correlated errors. This can lead to some confusion because of the similarity of the name to that of tests of correlation (e.g., Pearson’s correlation), but correlated errors simply means that the errors are not independent. The errors are at high risk of not being independent if the observations are not independent.

For example, if you split a group of individuals into four groups based on their physical activity level (e.g., a “sedentary” group, “low” group, “moderate” group and “high” group), no one in the sedentary group can also be in the high group, no one in the moderate group can also be in the high group, and so forth. As another example, you might randomly assign participants to either a control trial or one of two interventions. Again, no participant can be in more than one group (e.g., a participant in the control group cannot be in either of the intervention groups). This will be true of any independent groups you form (i.e., a participant cannot be a member of more than one group). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in different groups are considered unrelated, not just different people. Furthermore, participants in one group cannot influence any of the participants in any other group.

An example of where related observations might be a problem is if all the participants in your study (or the participants within each group) were assessed together, such that a participant’s performance affects another participant’s performance (e.g., participants encourage each other to lose more weight in a ‘weight loss intervention’ when assessed as a group compared to being assessed individually; or athletic participants being asked to complete ‘100m sprint tests’ together rather than individually, with the added competition amongst participants resulting in faster times, etc.). This can occur when you have tested individuals in blocks (e.g., 10 participants at a time) to make life easier for yourself or due to other constraints. However, the participants in each block might provide more similar results than those from other blocks. Participants might also be considered related due to their inherent or preselected attributes. For example, your sample may consist of twins or a husband and wife, and yet you may have considered them to be unrelated when they are related. Alternately, you may have repeatedly tested the same participant without accounting for the fact that his or her measurements will be more similar to one another than to those of other participants. If you are using the same participants in each group or they are otherwise related, a two-way repeated measures ANOVA is a more appropriate test (or a two-way mixed ANOVA if only one of your independent variables consists of related groups).

Independence of observations is largely a study design issue rather than something you can test for using SPSS Statistics, but it is an important assumption of the two-way ANOVA. If your study fails this assumption, you will need to use another statistical test instead of the two-way ANOVA.

  • Assumption #4: There should be no significant outliers in any cell of the design. Outliers are simply data points within your data that do not follow the usual pattern (e.g., in a study of 100 students’ IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative impact on the two-way ANOVA by: (a) distorting the differences between cells of the design; and (b) causing problems when generalizing the results (of the sample) to the population.
  • Assumption #5: Your dependent variable (residuals) should be approximately normally distributed for each cell of the design. The assumption of normality is necessary for statistical significance testing using a two-way ANOVA. However, the two-way ANOVA is considered “robust” to violations of normality. This means that some violation of this assumption can be tolerated and the test will still provide valid results. Therefore, you will often hear of this test only requiring approximately normally distributed data. Furthermore, as sample size increases, the distribution can be quite non-normal and, thanks to the Central Limit Theorem, the two-way ANOVA can still provide valid results. Unfortunately, how large is large enough is not well known (e.g., Wilcox, 2012a). Also, it should be noted that if the distributions are all skewed in a similar manner (e.g., all moderately negatively skewed), this is not as troublesome when compared to the situation where you have groups that have differently-shaped distributions (e.g., each combination of groups has different skews). Technically, the assumption of normality is with respect to the residuals and not the raw data. Therefore, in this example, you need to investigate whether the residuals, RES_1, are normally distributed in each cell of the design.
  • Assumption #6: The variance of your dependent variable (residuals) should be equal in each cell of the design. This assumption is referred to as the assumption of homogeneity of variances. It requires that the (population) variance of the residuals, RES_1, is the same in each cell of the design. This assumption is necessary for statistical significance testing in the two-way ANOVA. Although this assumption can be violated a little in studies with equal, but not small, sample sizes in each cell of the design, it is still considered an important assumption. You can determine whether this assumption is met using Levene’s test for equality of variances (a minimal sketch of checking these last two assumptions follows this list).
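Where you want to sanity-check these last two assumptions outside SPSS Statistics, the sketch below uses scipy to run a Shapiro-Wilk test within each cell and Levene’s test across cells. The data frame is hypothetical and randomly generated purely so the code runs end to end.

  import numpy as np
  import pandas as pd
  from scipy import stats

  # Hypothetical long-format data with ten cases per cell of a 2 x 2 design
  rng = np.random.default_rng(1)
  df = pd.DataFrame({
      "drug": np.repeat(["Current", "Experimental"], 20),
      "risk": np.tile(np.repeat(["Low", "High"], 10), 2),
      "cholesterol": rng.normal(5.5, 0.5, 40),
  })

  # Shapiro-Wilk test of approximate normality within each cell of the design
  for (drug, risk), cell in df.groupby(["drug", "risk"]):
      w, p = stats.shapiro(cell["cholesterol"])
      print(f"{drug}/{risk}: W = {w:.3f}, p = {p:.3f}")

  # Levene's test for homogeneity of variances across the four cells
  cells = [cell["cholesterol"] for _, cell in df.groupby(["drug", "risk"])]
  print(stats.levene(*cells))  # p > .05 is consistent with equal variances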

Interpreting Results

After running the two-way ANOVA procedure in the Procedure section earlier, SPSS Statistics will have generated a number of tables and graphs that provide the starting point to interpret your results.

Below, you will find a general overview of the steps you need to take to analyze your results, depending on the result of your interaction effect. It is intended to give you a clearer picture of the route you might follow through this Interpreting Results section. In this regard, there are two main steps you can follow to interpret the results of your two-way ANOVA. First, you need to determine whether a statistically significant interaction effect exists (STEP #1). This starts the process of interpreting your results. Both main steps are briefly explained below.

  • Step #1
    Do you have a statistically significant interaction effect? The primary goal of running a two-way ANOVA is to determine whether there is an interaction between the two independent variables on the dependent variable. One of these two independent variables can act as a focal variable and the other as a moderator variable depending on your study design. For example, we wanted to determine whether there was an interaction between the two independent variables, gender and education_level, on the dependent variable, political_interest. Our focal variable was education level (i.e., education_level) and our moderator variable was gender (i.e., gender). If yes – you have a statistically significant interaction effect – go to STEP 2A.
    If no – you do not have a statistically significant interaction effect – go to STEP 2B.
  • Step #2A
    You have a statistically significant interaction effect. Do you have any statistically significant simple main effects or interaction contrasts? When the interaction term is statistically significant, this indicates that the effect that one independent variable (e.g., education_level) has on the dependent variable (e.g., political_interest) depends on the level of the other independent variable (e.g., gender). In our example, this means that we are comparing two effects: (a) the effect of education level on interest in politics in males; and (b) the effect of education level on interest in politics in females. These two effects are called simple effects or, more commonly, simple main effects (i.e., there are two simple main effects: one for males and one for females). Therefore, if you have a statistically significant interaction effect you can follow up this result by running simple main effects (a minimal Python sketch of this follows this list).

Note: It is also possible to follow up a statistically significant interaction with interaction contrasts rather than simple main effects.

  • Step #2B
    You do not have a statistically significant interaction effect. Do you have any statistically significant main effects? When you do not have a statistically significant interaction effect, this indicates that the effect of an independent variable is the same for each level of the other independent variable. In other words, the simple main effects, mentioned in Step #2A above, are all equal. In our example, if there was not a statistically significant interaction effect, this would mean that the effect of education level on interest in politics is the same for males and females. As such, it might make sense to consider these simple main effects together and come up with an overall measure of the effect of education level ignoring gender. That is, why bother separating out the effects for males and females when they are the same? Just consider them together. We can do this by “averaging” the simple main effects. This is called a main effect. It is customary to interpret the main effects and, if a main effect is statistically significant, follow up with a post hoc analysis (e.g., all pairwise comparisons).
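One simple way to carry out STEP 2A outside SPSS Statistics is to test the effect of the focal variable separately at each level of the moderator; the sketch below does this with Python and statsmodels on a randomly generated, purely illustrative data set using the variable names from the example above.

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  from statsmodels.formula.api import ols

  # Hypothetical data: gender, education_level and political_interest
  rng = np.random.default_rng(2)
  df = pd.DataFrame({
      "gender": np.repeat(["male", "female"], 30),
      "education_level": np.tile(["school", "college", "university"], 20),
      "political_interest": rng.normal(10, 2, 60),
  })

  # Simple main effects: the effect of education_level within each gender
  for gender, subset in df.groupby("gender"):
      model = ols("political_interest ~ C(education_level)", data=subset).fit()
      print(gender)
      print(sm.stats.anova_lm(model, typ=2))

Note that this subset approach uses a separate error term within each gender, whereas SPSS Statistics’ simple main effects pool the error term across the whole design, so the two can differ slightly.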

One-Way ANCOVA

The analysis of covariance (ANCOVA) can be thought of as an extension of the one-way ANOVA to incorporate a covariate variable. This covariate is linearly related to the dependent variable and its inclusion into the analysis can increase the ability to detect differences between groups of an independent variable. An ANCOVA is used to determine whether there are any statistically significant differences between the adjusted population means of two or more independent (unrelated) groups.

For example, you could use a one-way ANCOVA to determine whether exam performance differed based on test anxiety levels amongst students whilst controlling for revision time (i.e., your dependent variable would be “exam performance”, measured from 0-100, your independent variable would be “test anxiety level”, which has three groups – “low-stressed students”, “moderately-stressed students” and “highly-stressed students” – and your covariate would be “revision time”, measured in hours). You want to control for revision time because you believe that the effect of test anxiety levels on exam performance will depend, to some degree, on the amount of time students spent revising.
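As a minimal sketch of this example outside SPSS Statistics (using Python and statsmodels, with a randomly generated, purely illustrative data set):

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  from statsmodels.formula.api import ols

  # Hypothetical data: exam (0-100), anxiety_group, revision_time (hours)
  rng = np.random.default_rng(3)
  df = pd.DataFrame({
      "anxiety_group": np.repeat(["low", "moderate", "high"], 20),
      "revision_time": rng.uniform(5, 30, 60),
  })
  df["exam"] = 40 + 1.5 * df["revision_time"] + rng.normal(0, 5, 60)

  # One-way ANCOVA: group differences in exam adjusted for revision_time
  model = ols("exam ~ C(anxiety_group) + revision_time", data=df).fit()
  print(sm.stats.anova_lm(model, typ=3))

The C(anxiety_group) row of the resulting table tests whether the adjusted group means differ.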

Assumptions

In order to run a one-way ANCOVA, there are ten assumptions that need to be considered. The first four assumptions relate to your choice of study design and the measurements you chose to make, whilst the remaining six assumptions relate to how your data fits the one-way ANCOVA model. These assumptions are:

  • Assumption #1: You have one dependent variable that is measured at the continuous level. Examples of continuous variables include height (measured in centimetres), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.
  • Assumption #2: You have one independent variable that consists of two or more categorical, independent groups. Example independent variables that meet this criterion include ethnicity (e.g., three groups: Caucasian, African American and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth.

Note 1: The “groups” of the independent variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”).

Note 2: If you have two independent variables rather than just one, and this second independent variable is not another covariate (see Assumption #3 below), you should consider a two-way ANCOVA instead of a one-way ANCOVA.

  • Assumption #3: You have one covariate variable that is measured at the continuous level (see Assumption #1 for examples of continuous variables). A covariate is simply a continuous independent variable that is added to an ANOVA model to produce an ANCOVA model. This covariate is used to adjust the means of the groups of the categorical independent variable. It acts no differently than it would in a normal multiple regression, but it is usually of less direct importance (i.e., its coefficient and other attributes are often of secondary importance, or of no interest at all). In an ANCOVA the covariate is generally only there to provide a better assessment of the differences between the groups of the categorical independent variable on the dependent variable.

Important: You can have many continuous covariates in a one-way ANCOVA, but we only show you how to analyze a design with one continuous covariate in this guide. We will be adding a separate guide to the site to help with multiple continuous covariates, so if this is of interest, please contact us and we will email you when the guide becomes available.

Note: If your covariate is not a continuous variable, but is a categorical variable with two or more categorical, independent groups, like the independent variable in Assumption #2 above, please contact us. Your covariate does not have to be continuous, but the analysis is not always then called ANCOVA. Therefore, we will be adding a separate guide to the site to help with this situation.

  • Assumption #4: You should have independence of observations, which means that there is no relationship between the observations in each group of the independent variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a one-way ANCOVA) are groups where there is no relationship between the participants in any of the groups. Most often, this occurs simply by having different participants in each group. For example, if you split a group of individuals into four groups based on their physical activity level (e.g., a “sedentary” group, “low” group, “moderate” group and “high” group), no one in the sedentary group can also be in the high group, no one in the moderate group can also be in the high group, and so forth. As another example, you might randomly assign participants to either a control trial or one of two interventions. Again, no participant can be in more than one group (e.g., a participant in the control group cannot be in either of the intervention groups). This will be true of any independent groups you form (i.e., a participant cannot be a member of more than one group). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in different groups are considered unrelated, not just different people (e.g., participants might be considered related if they are husband and wife, or twins). Furthermore, participants in one group cannot influence any of the participants in any other group. It is also fairly common to hear this type of study design, with two or more independent groups, being referred to as “between-subjects” because you are concerned with the differences in the dependent variable between different subjects. An example of where related observations might be a problem is if all the participants in your study (or the participants within each group) were assessed together, such that a participant’s performance affects another participant’s performance (e.g., participants encourage each other to lose more weight in a ‘weight loss intervention’ when assessed as a group compared to being assessed individually; or athletic participants being asked to complete ‘100m sprint tests’ together rather than individually, with the added competition amongst participants resulting in faster times, etc.). Independence of observations is largely a study design issue rather than something you can test for, but it is an important assumption of the one-way ANCOVA. If your study fails this assumption, you will need to use another statistical test instead of the one-way ANCOVA.
  • Assumption #5: The covariate should be linearly related to the dependent variable at each level of the independent variable. The first assumption you need to test for is whether there is a linear relationship between the covariate, pre, and the dependent variable, post, for each level of the independent variable, group. In the one-way ANCOVA model, it is assumed that the covariate, pre, is linearly related to the dependent variable, post, for all groups of the independent variable, group. To test this assumption you can plot a grouped scatterplot of the dependent variable, post, against the covariate, pre, grouped on the independent variable, group. You can also add lines of best fit for each group for extra clarity. We show you how to plot a grouped scatterplot using the Chart Builder… procedure in SPSS Statistics, as well as explain how to interpret the output.
  • Assumption #6: You should have homogeneity of regression slopes. This assumption checks that there is no interaction between the covariate, pre, and the independent variable, group. Put another way, the regression lines you plotted for Assumption #5 above must be parallel (i.e., they must have the same slope). However, whilst this grouped scatterplot will give you an indication of whether the slopes are parallel, you should test this assumption statistically by determining whether there is a statistically significant interaction term, group*pre. One of the reasons for this is that you might not expect the lines to always be parallel as they are plots of the sample data and the assumption applies to the population regression lines – you will always expect some deviation. By default, SPSS Statistics does not include an interaction term between a covariate and an independent variable in its GLM Univariate procedure. If you contact us, we will show you how to specifically request this term in the model using the Univariate… procedure to determine if there is a statistically significant interaction term, before showing you how to determine if you have homogeneity of regression slopes (a minimal sketch of this check follows this list).
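A rough way to run this check outside SPSS Statistics is to refit the ANCOVA model with the interaction term included; the sketch below uses Python and statsmodels with a randomly generated, purely illustrative data set named after the pre, post and group variables above.

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  from statsmodels.formula.api import ols

  # Hypothetical data: group, pre (covariate) and post (dependent variable)
  rng = np.random.default_rng(4)
  df = pd.DataFrame({"group": np.repeat(["control", "low", "high"], 15),
                     "pre": rng.normal(6.0, 0.5, 45)})
  df["post"] = df["pre"] - 0.3 + rng.normal(0, 0.2, 45)

  # Refit the ANCOVA model with the group*pre interaction term added
  slopes_model = ols("post ~ C(group) * pre", data=df).fit()

  # A non-significant C(group):pre row supports homogeneity of regression slopes
  print(sm.stats.anova_lm(slopes_model, typ=3))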

Interpreting Results

After running the one-way ANCOVA procedures and testing that your data meet the assumptions of a one-way ANCOVA in the previous sections, SPSS Statistics will have generated a number of tables that contain all the information you need to report the results of your one-way ANCOVA. We show you how to interpret these results.

The one-way ANCOVA has two main objectives: (1) to determine whether the independent variable is statistically significant in terms of the dependent variable; and (2) if so, to determine where any differences between the groups of the independent variable lie. Both of these objectives will be answered in the following sections:

  • Descriptive statistics and estimates: You can start your analysis by getting an overall impression of what your data is showing through the descriptive statistics and estimates (the “Descriptive Statistics” and “Estimates” tables). The Descriptive Statistics table presents the mean, standard deviation and sample size for the dependent variable, post, for the different groups of the independent variable, group. You can use this table to understand certain aspects of your data, such as: (a) whether there are an equal number of participants in each of your groups; (b) which groups had the higher/lower mean score (and what this means for your results); and (c) if the variation in each group is similar. However, these values do not include any adjustments made by the use of a covariate in the analysis, which is important. Therefore, you need to consult the Estimates table where the mean values of the groups of the independent variable have been adjusted by the covariate, pre. These values are called adjusted means because they have been adjusted by the covariate (a minimal sketch of computing adjusted means follows this list).
  • One-way ANCOVA results: In evaluating the main one-way ANCOVA results, you can start by determining the overall statistical significance of the model; that is, whether the (adjusted) group means are statistically significantly different (i.e., is the independent variable statistically significant?). In our example, we want to determine whether there was an overall statistically significant difference in post-intervention cholesterol concentration (post) between the different interventions (group) once their means had been adjusted for pre-intervention cholesterol concentrations (pre). This is achieved by interpreting the Tests of Between-Subjects Effects table, which contains the main results of the one-way ANCOVA.
  • Post hoc tests: If there is a statistically significant difference between the adjusted means (i.e., your independent variable is statistically significant), you can use a Bonferroni post hoc test to determine where exactly the differences lie. By inspecting the Pairwise Comparisons table, you can determine whether cholesterol concentration was, for example, statistically significantly greater or smaller in the control group compared to the low-intensity exercise group, as well as determining what the mean difference was (including 95% confidence intervals). If you contact us, we will explain how to interpret the Bonferroni post hoc test results.
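The adjusted means themselves are straightforward to reproduce outside SPSS Statistics: fit the ANCOVA model and predict the dependent variable for each group with the covariate fixed at its grand mean. A minimal sketch, reusing the hypothetical group/pre/post data frame from the homogeneity-of-slopes sketch above:

  import pandas as pd
  from statsmodels.formula.api import ols

  # ANCOVA model (no interaction term this time)
  model = ols("post ~ C(group) + pre", data=df).fit()

  # Adjusted means: predicted post per group with pre held at its grand mean
  grid = pd.DataFrame({"group": sorted(df["group"].unique())})
  grid["pre"] = df["pre"].mean()
  grid["adjusted_mean"] = model.predict(grid)
  print(grid)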

Three-Way RM ANOVA

The three-way repeated measures ANOVA is used to determine if there is a statistically significant interaction effect between three within-subjects factors on a continuous dependent variable (i.e., if a three-way interaction exists). As such, it extends the two-way repeated measures ANOVA, which is used to determine if such an interaction exists between just two within-subjects factors (i.e., rather than three within-subjects factors).

Note: It is quite common for “within-subjects factors” to be called “independent variables”, but we will continue to refer to them as “within-subjects factors” (or simply “factors”) in this guide. Furthermore, it is worth noting that the three-way repeated measures ANOVA is also referred to more generally as a “factorial repeated measures ANOVA” or more specifically as a “three-way within-subjects ANOVA”.

A three-way repeated measures ANOVA can be used in a number of situations. For example, you might be interested in the effect of two different types of ski goggle (i.e., blue-tinted or gold-tinted ski goggles) on ski performance (i.e., time to complete a ski run). However, you are concerned that the effect of the different lens colours on ski performance might be different depending on the snow condition (i.e., whether there has been recent snowfall or not), as well as whether it is overcast or sunny (i.e., current weather conditions). Indeed, you suspect that the effect of the type of lens colour on ski performance will depend on both the snow conditions and the current weather conditions. As such, you want to determine if a three-way interaction effect exists between lens colour, snow conditions and the current weather conditions (i.e., the three within-subjects factors) in explaining ski performance. A three-way repeated measures ANOVA can be used to examine whether such a three-way interaction exists.
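Here is a minimal sketch of how such a design could be fitted outside SPSS Statistics, using Python and statsmodels’ AnovaRM (which requires a fully balanced design: exactly one observation per skier per cell); the data are randomly generated, purely for illustration.

  import numpy as np
  import pandas as pd
  from statsmodels.stats.anova import AnovaRM

  # Hypothetical balanced design: every skier measured once in each of the
  # 2 (lens) x 2 (snow) x 2 (weather) = 8 conditions
  rng = np.random.default_rng(5)
  subject, lens, snow, weather = np.meshgrid(
      np.arange(12), ["blue", "gold"], ["fresh", "none"], ["overcast", "sunny"],
      indexing="ij")
  df = pd.DataFrame({"subject": subject.ravel(), "lens": lens.ravel(),
                     "snow": snow.ravel(), "weather": weather.ravel(),
                     "time": rng.normal(60, 5, 96)})

  # The lens:snow:weather row of the output tests the three-way interaction
  print(AnovaRM(df, depvar="time", subject="subject",
                within=["lens", "snow", "weather"]).fit())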

Assumptions

In order to run a three-way repeated measures ANOVA, there are five assumptions that need to be considered. The first two relate to your choice of study design, whilst the other three reflect the nature of your data:

  • Assumption #1: You have one dependent variable that is measured at the continuous level (i.e., it is measured at the interval or ratio level). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth.
  • Assumption #2: You have three within-subjects factors where each within-subjects factor consists of two or more categorical levels. These are two particularly important terms that you will need to understand in order to work through this guide; that is, a “within-subjects factor” and “levels”. Both terms are explained below: A factor is another name for an independent variable. However, we use the term “factor” instead of “independent variable” throughout this guide because in a repeated measures ANOVA, the independent variable is often referred to as the within-subjects factor. The “within-subjects” part simply means that the same cases (e.g., participants) are either: (a) measured on the same dependent variable at the same “time points”; or (b) measured on the same dependent variable whilst undergoing the same “conditions” (also known as “treatments”). For example, you might have measured 10 individuals’ 100m sprint times (the dependent variable) on five occasions (i.e., five time points) during the athletics season to determine whether their sprint performance improved. Alternately, you may have measured 20 individuals’ task performance (the dependent variable) when working in three different lighting conditions (e.g., red, blue and natural lighting) to determine whether task performance was affected by the colour lighting in the room. For now, all you need to know is that a within-subjects factor is another name for an independent variable in a three-way repeated measures ANOVA where the same cases (e.g., participants) are measured on the same dependent variable on two or more occasions. When referring to a within-subjects factor, we also talk about it having “levels”. More specifically, a within-subjects factor has “categorical” levels, which means that it is measured on a nominal, ordinal or discrete-time scale. Such ordinal or discrete-time variables in a three-way repeated measures ANOVA are typically two or more “time points” (e.g., two time points where the dependent variable is measured “pre-intervention” and “post-intervention”; three time points where the dependent variable is measured: “pre-intervention”, “post-intervention” and “6-month follow-up”; or four time points where the dependent variable is measured: at “10 secs”, “20 secs”, “30 secs” and “40 secs”). Such nominal variables in a three-way repeated measures ANOVA are typically two or more “conditions” (e.g., two conditions where the dependent variable is measured: a “control” and an “intervention”; three conditions where the dependent variable is measured: a “control”, “intervention A” and “intervention B”; or four conditions where the dependent variable is measured: in a room with “red lighting”, “blue lighting”, “yellow lighting” and “natural lighting”). The time points or conditions are referred to as “levels” of the ordinal, nominal or discrete-time variable (e.g., three time points reflect three levels). Therefore, when we refer to a “level” of a within-subjects factor in the guide, we are only referring to “one” level (e.g., the room with “red lighting” or the room with “blue lighting”). However, when we refer to “levels” of a within-subjects factor, we are referring to “two or more” levels (e.g., “red and blue” lighting, or “red, blue and yellow” lighting).
  • Assumption #3: There should be no significant outliers in any cell of the design. Outliers are simply data points within your data that do not follow the usual pattern (e.g., in a study of 100 students’ IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative impact on the three-way repeated measures ANOVA by: (a) distorting the differences between cells of the design; and (b) causing problems when generalizing the results (of the sample) to the population. Due to the effect that outliers can have on your results, you have to choose whether you want to: (a) keep them in your data; (b) remove them; or (c) alter their value in some way.
  • Assumption #4: Your dependent variable should be approximately normally distributed for each cell of the design. The assumption of normality is necessary for statistical significance testing using a three-way repeated measures ANOVA. However, the three-way repeated measures ANOVA is considered somewhat “robust” to violations of normality. This means that some violation of this assumption can be tolerated and the test will still provide valid results. Therefore, you will often hear of this test only requiring approximately normally distributed data. Furthermore, as sample size increases, the distribution can be very non-normal and, thanks to the Central Limit Theorem, the three-way repeated measures ANOVA can still provide valid results. Also, it should be noted that if the distributions are all skewed in a similar manner (e.g., all moderately negatively skewed), this is not as troublesome when compared to the situation where you have combinations of levels of the three within-subjects factors that have differently-shaped distributions (e.g., not all combinations of levels of the three within-subjects factors are moderately negatively skewed). Therefore, in this example, you need to investigate whether strength scores are normally distributed for each cell of the design.

Note: Technically, it is the residuals (errors) that need to be normally distributed, but the observations can act in their place (i.e., as surrogates).

  • Assumption #5: The variance of the differences between groups should be equal. This assumption is referred to as the assumption of sphericity. It is sometimes described as the repeated measures equivalent of the assumption of homogeneity of variances and refers to the variances of the differences between the levels rather than the variances within each level. This assumption is necessary for statistical significance testing in the three-way repeated measures ANOVA. This assumption is very important and violation of sphericity can lead to invalid results.

Interpreting Results

After running the three-way repeated measures ANOVA procedure and testing for the assumptions of the three-way repeated measures ANOVA, SPSS Statistics will have generated a number of tables and graphs that provide the starting point to interpret your results. We show you how to interpret these results and follow them up. We also show how to write up this output as you work through the section.

There are four steps you can follow to interpret the results for your three-way repeated measures ANOVA although whether you will need to follow all four steps (or just two or three steps) will depend on what your results show. First, you need to determine whether a statistically significant three-way interaction exists (STEP #1). This starts the process of interpreting your results.

  • STEP #1:
    Do you have a statistically significant three-way interaction? A three-way interaction is when one or more simple two-way interactions are different (at the level of a third factor) on the dependent variable. Which of your three factors make up the simple two-way interaction and which acts as the third factor will depend on your study design. For example, we have a three-way interaction between cardio, weights and time (cardio*weights*time), but are interested in the simple two-way interaction between the two factors (weights*time) at the different levels of the factor, cardio. In other words, is the effect of the interaction between weights and time on strength scores affected by whether cardiovascular training is present? If yes – you have a statistically significant three-way interaction – go to STEP 2A.
    If no – you do not have a statistically significant three-way interaction – go to STEP 2B.
  • STEP #2A:
    You have a statistically significant three-way interaction. Do you have any statistically significant simple two-way interactions? You know that there is a statistically significant difference in the simple two-way interactions for one or more levels of a third factor on the dependent variable. This informs you that you need to investigate each simple two-way interaction for statistical significance. For example, if there was a simple two-way interaction between factor 1 and factor 2 (factor 1*factor 2) at one or more levels of factor 3, which let’s say has two levels (level A and level B), simple two-way interactions will tell you whether the dependent variable differs based on this interaction (i.e., between factor 1 and factor 2) at just one level (e.g., level A or level B) of factor 3 or both levels of factor 3. For example, let’s say that we know there is a statistically significant difference in strength score based on the interaction between weight training and time (i.e., the two factors: weights and time), based on whether cardiovascular training is present (i.e., the third factor, cardio). Determining whether there are any statistically significant simple two-way interactions will tell us whether strength scores are affected by a weights*time effect at one or both of the levels of our third factor, cardio. However, you should note that it is possible to have a statistically significant three-way interaction, but not have any statistically significant simple two-way interactions. If no – you do not have any statistically significant simple two-way interactions – end analysis and write up.
    If yes – you have statistically significant simple two-way interactions – go to STEP 3A.
  • STEP #2B:
    You do not have a statistically significant three-way interaction. Do you have any statistically significant two-way interactions? A two-way interaction ‘ignores’ the influence of a third factor. Since you are running a three-way repeated measures ANOVA, meaning that you have a total of three factors, there are three possible two-way interactions (i.e., factor 1*factor 2, factor 1*factor 3 and factor 2*factor 3). Therefore, a two-way interaction is when there is a difference in the dependent variable based on an interaction between two factors. For example, since our three factors are weights, cardio and time, the three possible two-way interactions are: cardio*weights, cardio*time and weights*time. Therefore, if there was a two-way interaction between cardio*time on strength score, this would mean that strength score differs based on some combination of cardio (i.e., whether cardiovascular training is present) and time (i.e., when the strength score was taken during the intervention). If no – you do not have any statistically significant two-way interactions – end analysis and write up.
    If yes – you have statistically significant two-way interactions – go to STEP 3B.
  • Step #3A
    You have statistically significant simple two-way interactions. Do you have any statistically significant simple simple main effects? After you know which of the levels of factor 3 affect the scores on the dependent variable based on the interaction between factor 1 and factor 2 (factor 1*factor 2), simple simple main effects determine the effect of factor 1 on the dependent variable at each level of factor 2 and vice versa. For example, let’s imagine that there was a statistically significant simple two-way interaction between factor 1 and factor 2 on the dependent variable at level B of factor 3, but not at level A (e.g., assuming that factor 3 had two levels: A and B). You could choose to investigate: (a) the effects of factor 1 on the scores on the dependent variable at each level of factor 2; (b) the effects of factor 2 on the scores on the dependent variable at each level of factor 1; or (c) both. Let’s imagine that you were interested in (a): the effects of factor 1 on the scores on the dependent variable at each level of factor 2. Also, let’s imagine that factor 2 had three levels: A, B and C. Simple simple main effects would tell you if factor 1 led to different scores on the dependent variable depending on which of the three levels of factor 2 were present. For example, let’s imagine that there was a statistically significant simple two-way interaction between weights and time on strength score with no cardiovascular training, but not with cardiovascular training. We could now investigate: (a) the effects of time on strength score at each level of weights with no cardiovascular training; (b) the effects of weights on strength score at each level of time with no cardiovascular training; or (c) both. Let’s imagine that we were interested in (a): the effects of time on strength score at each level of weights with no cardiovascular training. Simple simple main effects would tell us if time leads to different strength scores depending on whether weight training was performed, in the trials where no cardiovascular training was performed. If no – you do not have any statistically significant simple simple main effects – end analysis and write up.
    If yes – you have statistically significant simple simple main effects – go to STEP 4A.
  • Step #3B
    You have statistically significant two-way interactions. Are there any statistically significant simple main effects? After you know which of the three possible two-way interactions are statistically significant, simple main effects determine whether the effect of one of the factors on the dependent variable differs based on the values of the other factor and vice versa (these two factors being the factors involved in the statistically significant interaction). However, if the factor in question has more than two groups/levels, you will not be able to determine which specific groups differ. For example, if strength scores differed based on a two-way interaction between weights and time (i.e., weights*time), simple main effects would tell us the effect of time on strength scores at each level of weights and vice versa. However, simple main effects would not tell us where any differences lie, only that there was a difference in strength score over all levels of time, for example. If no – there are no statistically significant simple main effects – end analysis and write up.
    If yes – there are statistically significant simple main effects – go to STEP 4B.
  • Step #4A
    You have statistically significant simple simple main effects. Are there any statistically significant simple simple comparisons? Finally, after you know that there are simple simple main effects assuming a factor 1*factor 2 interaction at one or more levels of factor 3, and have one or more statistically significant simple simple main effects, you can now use simple simple comparisons to determine: (a) exactly where the differences were (e.g., factor 1 led to different scores on the dependent variable when level B of factor 2 was present, but not when level A was present); and (b) the direction and magnitude of the difference in the dependent variable (e.g., that the dependent variable was higher by X amount when level B of factor 2 was present compared with level A). Therefore, let’s imagine that the simple simple main effect of time was statistically significant for the weight-training-only trial. We can now use simple simple comparisons to determine: (a) exactly where any differences were in strength score between all possible combinations of time (pre, mid and post) for the weight-training-only trial; and (b) the direction and magnitude of any differences in strength score based on these different time points (e.g., strength score was 11.3 kg higher at the mid-intervention time point than at the pre-intervention time point in the weight-training-only trial). If no – there are no statistically significant simple simple comparisons – end analysis and write up.
    If yes – there are statistically significant simple simple comparisons – interpret findings and write up.
  • Step #4B
    You have statistically significant simple main effects. Are there any statistically significant pairwise comparisons? Finally, after you know that there are simple main effects, pairwise comparisons determine where these differences lie. For example, if the scores on the dependent variable differed based on some combination of factor 2 and factor 3 (i.e., factor 2*factor 3), pairwise comparisons would tell you: (a) exactly where the differences were (e.g., when the second level of factor 2 and first level of factor 3 were present compared to when the second level of factor 2 and third level of factor 3 were present); and (b) the direction and magnitude of the difference in the dependent variable (e.g., that the dependent variable was higher by X amount when the second level of factor 2 and first level of factor 3 were present compared to when the second level of factor 2 and third level of factor 3 were present). For example, if simple main effects showed us that strength scores differed based on some combination of the different levels of weights and time, pairwise comparisons could tell us that strength score was, for example, 19.2 kg higher between pre- and mid- time points when weight training was performed. If no – there are no statistically significant pairwise comparisons – end analysis and write up.
    If yes – there are statistically significant pairwise comparisons – interpret findings and write up.

Two-Way RM ANOVA

The two-way repeated measures ANOVA is used to determine if there is a statistically significant interaction effect between two within-subjects factors on a continuous dependent variable (i.e., if a two-way interaction exists). It is an extension of the one-way repeated measures ANOVA, which only includes a single within-subjects factor.

Note: It is quite common for “within-subjects factors” to be called “independent variables”, but we will continue to refer to them as “within-subjects factors” (or simply “factors”) in this guide. Furthermore, it is worth noting that the two-way repeated measures ANOVA is also referred to as a “within-within-subjects ANOVA” or “two-way within-subjects ANOVA”.

A two-way repeated measures ANOVA can be used in a number of situations. For example, imagine you are interested in the effect of two different types of ski goggle (i.e., blue-tinted or gold-tinted ski goggles) on ski performance (i.e., time to complete a ski run). In particular, you are concerned that the effect of the different lens colours on ski performance might be different depending on whether it is overcast or sunny (i.e., under different weather conditions). You suspect that ski performance will depend on both ski goggle lens colour and the weather conditions. As such, you want to determine if a two-way interaction effect exists between ski goggle lens colour and weather conditions (i.e., the two within-subjects factors) in explaining ski performance. A two-way repeated measures ANOVA can be used to examine whether such a two-way interaction exists.

Assumptions

In order to run a two-way repeated measures ANOVA, there are five assumptions that need to be considered. The first two relate to your choice of study design, whilst the other three reflect the nature of your data:

  • Assumption #1: You have one dependent variable that is measured at the continuous level (i.e., it is measured at the interval or ratio level). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. You can learn more about continuous variables in our article: Types of Variable.
  • Assumption #2: You have two within-subjects factors where each within-subjects factor consists of two or more categorical levels. These are two particularly important terms that you will need to understand in order to work through this guide; that is, a “within-subjects factor” and “levels”. Both terms are explained below: A factor is another name for an independent variable. However, we use the term “factor” instead of “independent variable” throughout this guide because in a repeated measures ANOVA, the independent variable is often referred to as the within-subjects factor. The “within-subjects” part simply means that the same cases (e.g., participants) are either: (a) measured on the same dependent variable at the same “time points”; or (b) measured on the same dependent variable whilst undergoing the same “conditions” (also known as “treatments”). For example, you might have measured 10 individuals’ 100m sprint times (the dependent variable) on five occasions (i.e., five time points) during the athletics season to determine whether their sprint performance improved. Alternately, you may have measured 20 individuals’ task performance (the dependent variable) when working in three different lighting conditions (e.g., red, blue, and natural lighting) to determine whether task performance was affected by the color lighting in the room. For now, all you need to know is that a within-subjects factor is another name for an independent variable in a two-way repeated measures ANOVA where the same cases (e.g., participants) are measured on the same dependent variable on two or more occasions. When referring to a within-subjects factor, we also talk about it having “levels”. More specifically, a within-subjects factor has “categorical” levels, which means that it is measured on a nominal, ordinal, or discrete-time scale. Such ordinal or discrete-time variables in a two-way repeated measures ANOVA are typically two or more “time points” (e.g., two time points where the dependent variable is measured “pre-intervention” and “post-intervention”; three time points where the dependent variable is measured: “pre-intervention”, “post-intervention” and “6-month follow-up”; or four time points where the dependent variable is measured: at “10 secs”, “20 secs”, “30 secs” and “40 secs”). Such nominal variables in a two-way repeated measures ANOVA are typically two or more “conditions” (e.g., two conditions where the dependent variable is measured: a “control” and an “intervention”; three conditions where the dependent variable is measured: a “control”, “intervention A” and “intervention B”; or four conditions where the dependent variable is measured: in a room with “red lighting”, “blue lighting”, “yellow lighting” and “natural lighting”). The time points or conditions are referred to as “levels” of the ordinal, nominal or discrete-time variable (e.g., three time points reflect three levels). Therefore, when we refer to a “level” of a within-subjects factor in the guide, we are only referring to “one” level (e.g., the room with “red lighting” or the room with “blue lighting”). However, when we refer to “levels” of a within-subjects factor, we are referring to “two or more” levels (e.g., “red and blue” lighting, or “red, blue and yellow” lighting).

Note: If you have three within-subjects factors rather than just two, you will need to run a three-way repeated measures ANOVA. The three-way repeated measures ANOVA is used to determine if there is an interaction effect between three within-subjects factors on a continuous dependent variable (i.e., if a three-way interaction exists). It is an extension of the two-way repeated measures ANOVA.

Interpreting Results

After running the two-way repeated measures ANOVA procedure, SPSS Statistics will have generated a number of tables and graphs that provide the starting point to interpret your results.

There are two steps you can follow to interpret the results for your two-way repeated measures ANOVA. First, you need to determine whether a statistically significant two-way interaction exists (STEP ONE). Next, if you have a statistically significant two-way interaction, you need to determine whether you have any statistically significant simple main effects (STEP TWO – OPTION A), but if you do not have a statistically significant two-way interaction, you need to determine whether you have any statistically significant main effects (STEP TWO – OPTION B). These two steps are explained below:

  • STEP ONE:
    Determine whether a statistically significant two-way interaction exists: The primary goal of running a two-way repeated measures ANOVA is to determine whether there is a statistically significant two-way interaction between the two within-subjects factors (i.e., a treatment*time interaction). We can gain an initial impression of whether we have an interaction between the two within-subjects factors by: (a) visually inspecting the profile plots that have been produced; and (b) consulting the descriptive statistics for the dependent variable based on the levels of the two within-subjects factors, which helps to verify any of the trends you identify in the profile plot. However, despite the usefulness of profile plots in understanding your data, you cannot determine an interaction effect from them because the profile plot is based on the sample data and we are interested in determining whether there is an interaction effect in the population (Fox, 2008). Therefore, a formal statistical test is required to test for the presence of an interaction effect (i.e., via statistical significance testing). Before you can find out the result of the two-way treatment*time interaction (i.e., whether the two-way interaction effect is statistically significant), you need to establish if the assumption of sphericity has been violated (specifically for the interaction term) using Mauchly’s test of sphericity. If the assumption of sphericity is met, this indicates that the statistical result of the two-way interaction will not be biased (with regard to this particular assumption) and no adjustment to the test is needed. On the other hand, if the assumption of sphericity is violated, this means that the result is biased in that it too easily returns a statistically significant result. However, a correction can be applied to counteract this bias. SPSS Statistics will produce four different test results for the two-way repeated measures ANOVA. The first result is for when the assumption of sphericity is met and the other three results are for when the assumption is violated. (A minimal sketch of fitting this design outside SPSS Statistics follows this list.)
  • STEP TWO – OPTION A:
    If you have a statistically significant two-way interaction, determine whether you have any statistically significant simple main effects: When you have a statistically significant two-way interaction, reporting the main effects can be misleading and you will want to determine the difference between trials at each level of time and vice versa, called simple main effects. Unfortunately, SPSS Statistics does not provide a built-in option to run simple main effects for this design, but it is possible to analyze the data using the GLM: Repeated Measures procedure on different subsets of the variables int_1 through con_3. Essentially, for a two-way repeated measures ANOVA, running simple effects is the same as running separate one-way repeated measures ANOVAs. As with any two-way interaction, you can investigate the effects of within-subjects factor A at every level of within-subjects factor B and/or the effects of within-subjects factor B at every level of within-subjects factor A. In this example, you could, therefore, investigate the effect of treatment on CRP concentration at every time point (i.e., at every level of time) or investigate the effect of time for the control trial and the exercise intervention trial (i.e., at every level of treatment). Often, you would choose one or the other based on theoretical reasons. For example, you might consider one of the within-subjects factors to be a moderating variable. In this example, we will consider both options.
  • STEP TWO – OPTION B:
    If you do not have a statistically significant two-way interaction, determine whether you have any statistically significant main effects: If you do not have a statistically significant two-way interaction, you need to interpret the main effects for the two within-subjects factors (i.e., the main effect of treatment and the main effect of time). If a main effect is statistically significant, you can follow it up with pairwise comparisons (i.e., a post hoc test); for the main effect of time, this will inform you where the differences in CRP concentration between time points lie (see the EMMEANS lines in the syntax sketch below).
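
To make these steps concrete, the following is a minimal, illustrative SPSS Statistics syntax sketch for this example. The variable names int_1 through con_3 come from the guide; the factor names treatment and time match the design described above, but the specific subcommand choices (the Bonferroni adjustment, the profile plot, and so on) are assumptions you would adapt to your own analysis rather than a prescribed procedure:

    * Two-way repeated measures ANOVA: treatment (2 levels) x time (3 levels).
    * Mauchly's test is reported automatically for effects involving time.
    GLM int_1 int_2 int_3 con_1 con_2 con_3
      /WSFACTOR=treatment 2 Polynomial time 3 Polynomial
      /METHOD=SSTYPE(3)
      /PLOT=PROFILE(time*treatment)
      /EMMEANS=TABLES(treatment*time) COMPARE(time) ADJ(BONFERRONI)
      /EMMEANS=TABLES(treatment*time) COMPARE(treatment) ADJ(BONFERRONI)
      /EMMEANS=TABLES(time) COMPARE(time) ADJ(BONFERRONI)
      /PRINT=DESCRIPTIVE
      /CRITERIA=ALPHA(.05)
      /WSDESIGN=treatment time treatment*time.

    * Simple main effect of time within the intervention trial only
    * (equivalent to a separate one-way repeated measures ANOVA).
    GLM int_1 int_2 int_3
      /WSFACTOR=time 3 Polynomial
      /EMMEANS=TABLES(time) COMPARE(time) ADJ(BONFERRONI)
      /WSDESIGN=time.

The first command addresses Step One (the treatment*time interaction and its sphericity check) and, through its EMMEANS lines, the Bonferroni-adjusted pairwise comparisons described in Option B. The second command illustrates the subset approach to simple main effects described in Option A; it would be repeated for the control trial (con_1 through con_3) or, for the other direction, for each pair of variables measured at the same time point.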

Kaplan-Meier Analysis

The Kaplan-Meier method (Kaplan & Meier, 1958) (also known as the “product-limit method”) is a nonparametric method used to estimate the probability of survival past given time points (i.e., it calculates a survival distribution). Furthermore, the survival distributions of two or more groups of a between-subjects factor can be compared for equality.
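
The “product-limit” name reflects how the estimate is built: the probabilities of surviving past each observed event time are multiplied together. Using standard notation (not specific to SPSS Statistics), the Kaplan-Meier estimate of the probability of surviving past time t is:

    \hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)

where the t_i are the observed event times, d_i is the number of events at time t_i, and n_i is the number of cases still “at risk” just before t_i (i.e., cases that have not yet experienced the event or been censored). For example, if 20 cases are at risk at the first event time and 3 experience the event, the estimated probability of surviving past that time is 1 − 3/20 = 0.85. Censored cases simply leave the at-risk count n_i from the time they are censored, which is how the method makes use of censored observations without treating them as events.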

For example, in a study on the effect of drug dose on cancer survival in rats, you could use the Kaplan-Meier method to understand the survival distribution (based on time until death) for rats receiving one of four different drug doses (i.e., “40 mg/m²/d”, “80 mg/m²/d”, “120 mg/m²/d” and “160 mg/m²/d”) (i.e., the survival time variable would be “time to death” and the between-subjects factor would be “drug dose”) and then compare the survival distributions (experiences) between the four doses to determine if they are equal. Alternatively, you could use the Kaplan-Meier method to determine whether the (distribution of) time to failure of a knee replacement differs based on exercise impact amongst young patients (i.e., the survival time would be “time to knee replacement failure” and the between-subjects factor would be “exercise impact”, which has three groups: “sedentary”, “low impact” and “high impact”).

Assumptions

In order to run a Kaplan-Meier analysis, there are six assumptions that must be met. These are:

  • Assumption 1: The event status should consist of two mutually exclusive and collectively exhaustive states: “censored” or “event” (where the “event” can also be referred to as “failure”). The event status is mutually exclusive because the outcome for a case is either that it was censored or that the event occurred; it cannot be both. For example, imagine that we were interested in the survival times of people suffering from skin cancer, where the event is (sadly), “death”. If the length of the experiment was 5 years, at the end of the 5-year period, all participants would either be “censored” or “dead”. Therefore, the two states are not only mutually exclusive, but also collectively exhaustive (i.e., every case must end in one of these two states – censored or event).
  • Assumption 2: The time to an event or censorship (known as the “survival time”) should be clearly defined and precisely measured. The Kaplan-Meier method, unlike some other approaches to survival analysis (e.g., the actuarial approach), requires the survival time to be recorded precisely (i.e., exactly when the event or censorship occurred) rather than simply recording whether the event occurred within some predefined interval (e.g., only recording that a death or censorship occurred sometime within a 1-, 2-, 3-, 4- or 5-year follow-up interval). In addition, the survival time should be clearly defined, whether it is measured in days, weeks, months, years, or some other time-based measurement.
  • Assumption 3: Where possible, left-censoring should be minimized or avoided. Left-censoring occurs when the starting point of an experiment is not easily identifiable. For example, imagine that we were interested in the survival times of people suffering from skin cancer. The “ideal” starting point would be to measure the survival time from the very moment that the participant developed skin cancer. However, it is more likely that the first time the participant knew they had cancer was the moment it was diagnosed, such that the “diagnosis” acts as the starting point for the experiment. Even if we restricted our sample to participants with a “Stage 1” cancer diagnosis, there would still be differences between participants. For example, some participants may have had a suspicious mole that they did not get checked for some time, whilst other participants may have had regular check-ups such that a diagnosis was made much earlier. Therefore, the time between the participant developing skin cancer and the diagnosis is unknown and is not included in the Kaplan-Meier analysis. The result is that this data – known as left-censored data – does not reflect the true survival time. Instead, the survival time recorded will be less than (or equal to) the true survival time. As such, the goal is to avoid left-censoring as much as possible.
  • Assumption 4: There should be independence of censoring and the event. This means that the reason why cases are censored does not relate to the event. For example, imagine that we were again interested in the survival times of people suffering from cancer, where the event is “death”. For the assumption of independent censoring to be met, we need to be confident that when we record that a participant is “censored”, this is not because they were at greater risk of the event occurring (i.e., death being the “event” in this case). Instead, there may be many other reasons why a participant is “legitimately censored”, including: (a) natural dropout or withdrawal (e.g., perhaps because the participant does not want to take part in the experiment any more or moves from the area); and (b) the event not occurring by the end of the experiment (e.g., if the follow-up period for the experiment is 5 years, any participant still alive at this point will be recorded as “censored”). Independent censoring is important because the Kaplan-Meier method is based on observed data (i.e., observed events) and assumes that censored data behaves in the same way as uncensored data (after the censoring). However, if the censored data does relate to the event (e.g., a participant that was recorded as being censored died due to the cancer or perhaps even something related to the cancer), this introduces serious bias into the results (e.g., over-estimating 5-year survival rates from skin cancer amongst participants).
  • Assumption 5: There should be no secular trends (also known as secular changes). A characteristic of many studies that involve survival analysis is that: (a) there is often a long time period between the start and end of the experiment; and (b) not all cases (e.g., participants) tend to start the experiment at the same time. For example, the starting point in our hypothetical experiment was when participants were “diagnosed” with skin cancer. However, imagine that we wanted a sample of 500 participants in our experiment. It may take a number of months to recruit all of these participants, who would each have different starting points (i.e., the dates when they were diagnosed), but we would “pool” the starting and subsequent times (e.g., everybody’s first diagnosis would be time point 0). However, if, over this period of time, factors that affect the likelihood of the event have changed, bias may be introduced. For example, death rates from skin cancer may have gone down following the introduction of new drugs, improving survival rates amongst participants joining the experiment later on (i.e., increasing right-censoring). Alternatively, the introduction of a national skin screening programme may have led to faster diagnoses, creating the appearance of better survival rates (i.e., reducing left-censoring). These factors (e.g., new drugs or better screening) are examples of secular trends that can bias the results.
  • Assumption 6: There should be a similar amount and pattern of censorship per group. One of the assumptions of the Kaplan-Meier method and the statistical tests for differences between group survival distributions (e.g., the log rank test, which we discuss much later in the guide) is that censoring is similar in all groups tested. This includes a similar “amount” of censorship per group and similar “patterns” of censorship per group. Failure to meet this assumption can lead to incorrect conclusions, as will be discussed later (Bland & Altman, 2004; Hosmer et al., 2008; Norušis, 2012). A quick initial check of the amount of censorship per group is sketched after this list.
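
Whilst the formal assessment of censoring is discussed later in the guide, a quick initial check of the “amount” of censorship per group can be made before running the main analysis. The following is a minimal sketch in SPSS Statistics syntax that assumes two hypothetical variable names – status (coded 1 for the event and 0 for censored) and group (the between-subjects factor):

    * Row percentages show the proportion of censored cases in each group.
    CROSSTABS
      /TABLES=group BY status
      /CELLS=COUNT ROW.

Markedly different percentages of censored cases across the groups would suggest that Assumption 6 deserves closer scrutiny.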

Interpreting Results

After running the Kaplan-Meier test procedures in the previous section, SPSS Statistics will have generated a number of tables and graphs that contain all the information you need to report the results of your Kaplan-Meier analysis. We show you how to interpret these results. We also show how to write up this output as you work through the section.

There are two stages to interpreting the results from a Kaplan-Meier analysis: (a) determining whether there are statistically significant differences between the survival distributions of the groups; and (b) if there are, carrying out pairwise comparisons to determine where such differences lie. To recap:

  • First, you need to determine whether there are statistically significant differences between the survival distributions: Before doing this, it is useful to interpret the plot of the (cumulative) survival functions for the groups of your between-subjects factor (e.g., the three groups of our between-subjects factor, intervention, which were: the “hypnotherapy programme”, “nicotine patch” and “e-cigarette”). To build on this plot and get another ‘feel’ for the results, it is a good idea to view the descriptive statistics that are produced, which illustrate how survival times vary between the groups. You can then consult the SPSS Statistics output from the three statistical tests that can be run to determine whether the survival distributions are equal (i.e., the log rank test, Breslow test and Tarone-Ware test). Ultimately, the results from these tests will determine whether there are any statistically significant differences in survival distribution between the groups of your between-subjects factor.
  • Second, if you have statistically significant differences between the survival distributions, you can carry out pairwise comparisons: If you already ran the pairwise comparison procedure, you can go straight to interpreting the SPSS Statistics output from this. We will show you how to interpret the pairwise comparisons for the log rank test. This will tell you which of the groups of your between-subjects factor differed from each other (e.g., whether there was a difference in the survival distribution for those participants who underwent the hypnotherapy programme compared to those using nicotine patches). A minimal syntax sketch covering both stages follows this list.
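
To tie these two stages together, here is a minimal, illustrative SPSS Statistics syntax sketch for this example. The factor name intervention comes from the guide; the survival time variable (time) and the event status variable (status, coded 1 for the event) are hypothetical names used purely for illustration:

    * Stage (a): overall tests of equality of the survival distributions.
    KM time BY intervention
      /STATUS=status(1)
      /PRINT TABLE MEAN
      /PLOT SURVIVAL
      /TEST LOGRANK BRESLOW TARONE
      /COMPARE OVERALL POOLED.

    * Stage (b): pairwise comparisons to locate the group differences.
    KM time BY intervention
      /STATUS=status(1)
      /TEST LOGRANK
      /COMPARE PAIRWISE.

The first command produces the survival table, the mean and median survival estimates, the cumulative survival plot and the three overall tests; the second requests the pairwise log rank comparisons described above. In practice, you may also wish to control for multiple comparisons across the pairwise tests, for example with a Bonferroni-type correction.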
