We will put forward two primary approaches to interaction analysis. They require the following independent variables to be included in the analysis:
| | Approach A: Interaction effect term | Approach B: Comparison of model fit |
| --- | --- | --- |
| Independent variables included in the model | x (main effect term) | x (main effect term) |
| | z (main effect term) | z (main effect term) |
| | x*z: the product of x and z (interaction effect term) | x*z: the product of x and z (interaction effect term), or all possible combinations of x and z included as dummies |
Note: This chapter focuses solely on two-way interactions (i.e. interactions between two variables). While it is possible to analyse interactions between more than two variables, the interpretation is usually not very straightforward.
Approach A: Interaction effect term
By including the two main effects (x and z) as well as the interaction effect term in the same model, we can see whether the interaction has any effect that goes beyond the main effects. In other words, is the interaction term statistically significant (p<0.05)? We also learn the direction of the interaction effect, i.e. what it means in substantive terms, although this effect is not always easy to interpret.
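The chapter does not tie Approach A to any particular software, but as a sketch, it could be carried out in Python with `statsmodels`. The data here are simulated purely for illustration (the variable names `y`, `x`, and `z` follow the text; the true coefficients are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate illustrative data with a genuine interaction between x and z.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
df["y"] = (1 + 0.5 * df["x"] + 0.3 * df["z"]
           + 0.8 * df["x"] * df["z"] + rng.normal(size=n))

# The formula "y ~ x * z" expands to the main effects x and z
# plus their product x:z (the interaction effect term).
model = smf.ols("y ~ x * z", data=df).fit()

print(model.params["x:z"])   # estimated interaction coefficient
print(model.pvalues["x:z"])  # its p-value; significant if below 0.05
```

The sign of the `x:z` coefficient tells us the direction of the interaction: here, a positive estimate means the effect of x on y grows as z increases.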
Approach B: Comparison of model fit
This approach is more flexible than the previous one since it is based on a comparison of model fit: does a model that includes the main effects as well as a) the interaction effect term or b) all possible combinations of x and z included as dummies, fit the data significantly better than a model that includes only the main effects? This can be formally tested with a likelihood ratio test. If the test produces a p-value below 0.05, it suggests that the model with the interaction fits the data better. Alternatively, or as a complement, one can compare Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) between the models; lower AIC and BIC values indicate better fit.
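As with Approach A, the text does not prescribe software, but the model-fit comparison could be sketched in Python with `statsmodels` and `scipy` as follows. The simulated data and coefficients are again hypothetical; the likelihood ratio statistic is computed from the two models' log-likelihoods:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulate illustrative data with a genuine interaction between x and z.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
df["y"] = (1 + 0.5 * df["x"] + 0.3 * df["z"]
           + 0.8 * df["x"] * df["z"] + rng.normal(size=n))

restricted = smf.ols("y ~ x + z", data=df).fit()  # main effects only
full = smf.ols("y ~ x * z", data=df).fit()        # adds the interaction term

# Likelihood ratio test: 2 * (difference in log-likelihoods),
# compared to a chi-squared distribution with df equal to the
# number of extra parameters in the fuller model.
lr_stat = 2 * (full.llf - restricted.llf)
df_diff = full.df_model - restricted.df_model
p_value = stats.chi2.sf(lr_stat, df_diff)
print(p_value)  # below 0.05 => the interaction model fits better

# AIC/BIC comparison: the model with the lower values has the better fit.
print(full.aic < restricted.aic)
print(full.bic < restricted.bic)
```

Note that the likelihood ratio test requires the two models to be nested (the main-effects model is a special case of the interaction model) and fitted to the same observations, which is the case here.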