Point Of Random Assignment Of Treatments

When attempting to establish a relationship between two variables, it is best to discover which variable affects the other. If by the end of an experiment you have identified which is the dependent variable and which is the independent variable, you will have produced a much more valid study than one which simply finds a connection, as you can then start investigating to what extent the IV affects the DV.

Yet what if you have indeed found a connection, but the methods you used to establish that connection ended up being the reason it occurred? In 2006, Fillmore et al. conducted a meta-analysis of 54 studies examining whether moderate alcohol use had an effect on a person’s health. The studies all seemed to indicate that moderate alcohol use could give a person a healthier heart, yet Fillmore found that many of the studies (47 of them) hadn’t randomly divided the participants into groups of drinkers and non-drinkers. Instead, they compared people who drank regularly with people who couldn’t drink because they were either a) old or b) dying. Now we know why the drinkers had healthier hearts: not because of the drinking, but because they were neither ill nor too old. So by not using the correct method, the studies found a connection that was in fact not there. Random assignment would have shown that this connection did not exist; any other assignment could have left the bias intact.

Random assignment ensures that the groups in a cause-and-effect study are unbiased, as it prevents people’s histories from introducing an extraneous variable into the experiment. Only for ethical reasons should it be abandoned; many of the studies above could not have used it, because doing so would have meant convincing non-drinkers to drink. Many of the teetotallers had their own reasons for not drinking alcohol, meaning that the scientists would have had to either force them to drink (highly unethical) or drop them from the study, leaving only drinkers, whom they would then have had to convince not to drink; dictating a way of life in this manner could again be highly unethical. So we can see just how difficult it is to use random assignment in some cases, yet in other experiments, where the participants’ pasts cannot make a large impact, I consider it the best assignment type available.




Interesting alcohol-related fact: A brewery tank ruptured in a London parish in 1814, releasing 3,500 barrels’ worth of beer, destroying two houses and killing nine people.


My past several posts have detailed confounding variables, a problem you might encounter in research or quality improvement projects.

To recap, confounding variables are correlated predictors. Leaving a confounding variable out of a statistical model can make an included predictor look falsely insignificant or falsely significant. In other words, they can completely flip your statistical results on their head!
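A small simulation makes this concrete. The sketch below (mine, not from the post; the variable names and effect sizes are illustrative) builds a predictor and an outcome that are both driven by a hidden confounder but have no direct effect on each other. The raw correlation between them looks strong, yet the partial correlation controlling for the confounder collapses to roughly zero:

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1)
c = [rng.gauss(0, 1) for _ in range(5000)]       # hidden confounder
x = [ci + rng.gauss(0, 0.5) for ci in c]         # predictor, driven only by c
y = [ci + rng.gauss(0, 0.5) for ci in c]         # outcome, driven only by c

r_xy, r_xc, r_yc = corr(x, y), corr(x, c), corr(y, c)
# Partial correlation of x and y, controlling for the confounder c:
partial = (r_xy - r_xc * r_yc) / math.sqrt((1 - r_xc**2) * (1 - r_yc**2))
print(f"raw r(x, y) = {r_xy:.2f}, partial r(x, y | c) = {partial:.2f}")
```

Leaving `c` out of the analysis makes `x` look like a strong predictor of `y`; accounting for it reveals there is no direct relationship at all.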

To find lurking confounding variables, you must take the time to understand your data and the important variables that may influence a process. Background research and solid subject-area knowledge can help you navigate data difficulties. You should also measure and include everything that you think is important.

Of course, understanding and measuring everything of importance may not be possible due to time and cost constraints. Indeed, all of the relevant variables may not be known or even measurable. What to do?

There is a simple solution to this complex problem. You can wave the white flag and admit that you don’t know everything, or at least that you can’t measure everything that affects your response. You randomize!

Randomness plays several important roles in the design of experiments. In this case, we’re talking about random assignment, which is different from random selection.

  • Random selection is how you draw the sample for your study. This allows you to make unbiased inferences about the population based on your sample.
  • Random assignment is how you assign the sample to the control and treatment groups in your experiment. This allows you to make causal conclusions about the effect of one variable on another variable.

Random assignment might involve flipping a coin, drawing names out of a hat, or using random numbers. All subjects should have the same probability of being assigned to any group. This process helps ensure that the groups are similar to each other when treatment begins. Therefore, any post-study differences between groups shouldn’t be due to prior differences.

Let’s work through an example and see how it combats confounding variables. Take the biomechanics study where we wanted to see if the jumping exercise (treatment group) produced greater bone density than the group that didn’t jump (control group). Further, let’s assume that greater physical activity is correlated with increased bone density but we didn’t measure it. We’ll compare two scenarios.

Scenario 1: We don’t use random assignment and, unbeknownst to us, the more physically active subjects end up in the treatment group. The treatment group starts out more active than the control group. Because activity increases bone density, the higher activity in the treatment group may account for the greater bone density compared to the less active control group. Because it is not in the model, activity is a confounding variable that makes the jumping exercise appear to be significant when it might not be.

Scenario 2: We use random assignment so the treatment and control groups start out with roughly equal levels of physical activity. Activity still affects bone density but it is equally spread across the groups. Indeed, the groups are roughly equal in all ways except for the jumping exercise in the treatment group. If the treatment group has a significantly higher bone density, it’s almost certainly due to the jumping exercise.

For both scenarios, the data and statistical results could be identical. However, the results for the second scenario are more valid thanks to the methodology.
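The two scenarios can be simulated directly. In the sketch below (my own illustration; the effect sizes and function names are invented, and the true jumping effect is deliberately set to zero), bone density depends only on unmeasured activity. Sorting the most active subjects into the treatment group manufactures a large group difference; random assignment makes it vanish:

```python
import random

rng = random.Random(0)

# Each subject has an unmeasured activity level; bone density depends on
# activity but NOT on the jumping exercise (true jump effect = 0).
subjects = [rng.uniform(0, 10) for _ in range(1000)]  # activity levels

def bone_density(activity, jumped):
    return 1.0 + 0.05 * activity + 0.0 * jumped + rng.gauss(0, 0.02)

def group_gap(treatment, control):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([bone_density(a, 1) for a in treatment])
            - mean([bone_density(a, 0) for a in control]))

# Scenario 1: the most active subjects end up in the treatment group.
ranked = sorted(subjects)
biased_gap = group_gap(ranked[500:], ranked[:500])

# Scenario 2: random assignment balances activity across the groups.
shuffled = subjects[:]
rng.shuffle(shuffled)
random_gap = group_gap(shuffled[500:], shuffled[:500])

print(f"biased gap: {biased_gap:.3f}, randomized gap: {random_gap:.3f}")
```

Under the biased split the treatment group shows a substantial bone-density advantage even though jumping does nothing; under random assignment the gap is near zero, which is the correct answer here.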

Random assignment helps protect you from the perils of confounding variables and competing explanations. However, you can’t always implement random assignment. For the bone density study, we did randomly assign the subjects to the treatment or control group. However, when I used the data from that study to look for patterns amongst the subjects who developed knee pain, I couldn’t randomly assign them to higher and lower calcium intake groups! This highlights one of the pitfalls of ad hoc data analysis.
We’ve detailed the negative aspects of confounding variables here and in my last several posts. However, confounding variables have a potential upside. They don’t sound quite so threatening when you think of them as proxy variables, which we’ll cover in my next post.

