The most appropriate analysis technique for the aforementioned experiment, where you have three or more experimental groups, and different participants in each group, is the ‘one-way independent analysis of variance’ (ANOVA, for short).

The ‘one-way’ indicates that a single experimental variable (the independent variable) is being manipulated. The ‘independent’ indicates that the groups contain different participants, with no cross-over between them.

Across a selection of samples, each sample mean can be given a 95% confidence interval: an interval constructed so that, for 95% of samples, it will contain the population mean. The width of this interval tells us how well the sample mean represents the population.
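As a minimal sketch of the idea, here is a 95% confidence interval for a sample mean computed in plain Python, using the normal approximation (for small samples a t-based interval would be used instead; the scores below are made up for illustration):

```python
# Sketch: normal-approximation 95% confidence interval for a sample mean.
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_ci(sample, level=0.95):
    """Return a (lower, upper) normal-approximation CI for the mean."""
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))      # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + level / 2)   # about 1.96 for a 95% interval
    return m - z * se, m + z * se

scores = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2]
low, high = mean_ci(scores)
```

A narrow interval means the sample mean pins down the population mean precisely; a wide one means it is a poor representative.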

Like the t-test, ANOVA assumes that the variances across experimental conditions are similar (homogeneity of variance). The extent to which this is true can be verified by Levene’s test, which tests the null hypothesis that the variances of the groups are the same.
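A sketch of Levene’s statistic in plain Python, built on absolute deviations from each group mean (the data are made up; in practice a library routine such as SciPy’s `levene` does this for you):

```python
# Sketch of Levene's test statistic W. Under the null hypothesis of equal
# variances, W follows an F distribution with (k - 1, N - k) degrees of
# freedom, where k is the number of groups and N the total sample size.
from statistics import mean

def levene_w(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # Absolute deviation of each score from its own group mean.
    z = [[abs(x - mean(g)) for x in g] for g in groups]
    z_bars = [mean(zi) for zi in z]
    z_grand = mean(x for zi in z for x in zi)
    between = sum(len(g) * (zb - z_grand) ** 2 for g, zb in zip(groups, z_bars))
    within = sum((x - zb) ** 2 for zi, zb in zip(z, z_bars) for x in zi)
    return ((n_total - k) / (k - 1)) * between / within

groups = [[2, 3, 7, 2, 6], [10, 8, 7, 5, 10], [10, 13, 14, 13, 15]]
w = levene_w(groups)
```

When every group is equally spread out, the between-group part of W collapses to zero, which is exactly the null hypothesis the test encodes.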

What you are seeking is the experimental effect: the systematic variance (due to the manipulation) set against the unsystematic variance (due to error and individual differences). Once you have these two quantities you can calculate the ‘F ratio’, and from it the probability of observing such a ratio if no real effect existed.

The F ratio is the mean square between groups (MSB in shorthand: n times the variance of the sample means) over the mean square error (MSE in shorthand: the mean of the sample variances). If there is a genuine difference in the population means – contrary to the null hypothesis that the means are equal – then the MSB figure will be larger than the MSE.
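For equal-sized groups the whole calculation fits in a few lines of Python (the scores are made up, chosen so the arithmetic is easy to follow):

```python
# Sketch: F = MSB / MSE for equal-sized groups.
# MSB = n * variance of the group means (systematic variance estimate);
# MSE = mean of the group variances (unsystematic variance estimate).
from statistics import mean, variance

def f_ratio(groups):
    n = len(groups[0])                       # subjects per condition (equal n)
    msb = n * variance([mean(g) for g in groups])
    mse = mean(variance(g) for g in groups)
    return msb / mse

groups = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
f = f_ratio(groups)
# Group means are 2, 5, 8, so MSB = 3 * 9 = 27; each group variance is 1,
# so MSE = 1 and F = 27: far more between-group than within-group variance.
```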

**Rationale**

If the null hypothesis is true and the population means are all equal, these two calculations – the between-groups estimate based on the variance of the condition means, and the mean of the variances within each condition – would yield roughly equal values.

If they are not, this tells us there could be an experimental effect responsible for the difference. The significance of this effect is calculated by finding the probability of such a ratio occurring by chance, given the number and size of the groups. Graphing the probability density of the F distribution is an easy way of determining whether the observed ratio falls outside 95% of the ‘natural’ variation and within the magic 0.05 bracket.
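That probability is normally read off the F distribution, but the same idea can be sketched with a permutation test in plain Python: shuffle the scores between groups many times and count how often an F ratio at least as large arises purely by chance (data made up; the reshuffling stands in for the ‘natural’ variance):

```python
# Sketch: estimating the probability of an F ratio at least this large under
# the null hypothesis, by randomly reshuffling scores between groups.
import random
from statistics import mean, variance

def f_ratio(groups):
    n = len(groups[0])
    msb = n * variance([mean(g) for g in groups])
    mse = mean(variance(g) for g in groups)
    return msb / mse

def permutation_p(groups, n_perms=2000, seed=42):
    rng = random.Random(seed)
    observed = f_ratio(groups)
    pooled = [x for g in groups for x in g]
    n = len(groups[0])
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        shuffled = [pooled[i * n:(i + 1) * n] for i in range(len(groups))]
        if f_ratio(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perms + 1)   # small-sample correction

groups = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
p = permutation_p(groups)  # should land well inside the 0.05 bracket here
```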

You should also be aware of the derivation of the MSB calculation. It is a proxy for the variance of the sampling distribution of the mean.

The variance of the sampling distribution of the mean is equal to the population variance divided by n, the number of subjects per sample. Turned around: if we knew the variance of the sampling distribution of the mean, we could recover the population variance by multiplying it by n.

We do not know it, but we can estimate it with the variance of the observed sample means; multiplying that estimate by n gives the MSB.
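The relationship can be checked numerically: draw many samples from one population and compare the variance of their means with σ²/n, where n is the number of subjects per sample (a simulation sketch with made-up parameters):

```python
# Sketch: the variance of sample means approximates sigma^2 / n.
# Population: normal with sigma = 1; each sample has n = 5 subjects.
import random
from statistics import variance

rng = random.Random(0)
n = 5
sample_means = [
    sum(rng.gauss(0, 1) for _ in range(n)) / n
    for _ in range(20000)            # many repeated samples
]
est = variance(sample_means)         # should be close to 1/5 = 0.2
scaled_back = n * est                # multiplying by n recovers sigma^2 ~ 1
```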

The total sum of squares, which you might hear referred to, is the sum, over all subjects in all conditions, of the squared differences from the grand mean. As long as there is an equal number of subjects in each condition, this overarching ‘grand mean’ is the mean of the condition means.
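Both facts can be verified in a few lines (the scores are made up): the total sum of squares splits exactly into a between-groups and a within-groups part, and with equal group sizes the grand mean equals the mean of the condition means.

```python
# Sketch: total sum of squares about the grand mean, and its partition into
# between-groups and within-groups sums of squares.
from statistics import mean

groups = [[3, 5, 4], [8, 6, 7], [9, 11, 10]]
all_scores = [x for g in groups for x in g]
grand = mean(all_scores)

ss_total = sum((x - grand) ** 2 for x in all_scores)
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
# ss_total equals ss_between + ss_within; the systematic and unsystematic
# variance in the F ratio come from these two components.
```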
