A statistically significant result is defined as an outcome that is unlikely to have occurred by chance. Conventionally, if the significance level is .05 or below (meaning there is less than a 5% chance of obtaining results this extreme by chance alone when no real effect exists), the result is taken to be significant. However, saying that something is statistically significant does not necessarily mean that the results are significant in the real world. Statistical significance basically means that what we have found is not nothing: there is probably something happening, because the observed effect is unlikely to be zero. But this does not mean that the results show something important. This is where scientific findings can be misinterpreted. In day-to-day life we take ‘significance’ to mean the finding of something important, whereas in science it just means that something has been detected. It is also worth remembering that, even when no real effect exists, roughly 1 in every 20 tests run at the .05 level will come out significant by chance alone. So even if statistical significance did always imply real-world significance, these results would not always be reliable, which again means it is not safe to assume that every statistically significant finding reflects a genuine effect.
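To make the “1 in 20” point concrete, here is a minimal simulation sketch (pure Python, with made-up group sizes; the large-sample z statistic is used as a stand-in for a full t-test). When there is genuinely no effect, tests run at the .05 level still come out “significant” about 5% of the time:

```python
import math
import random

random.seed(1)

def two_sample_z(a, b):
    """Large-sample two-sample test statistic (approximate z)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Both groups are drawn from the SAME population, so the null
# hypothesis is true and every "significant" result is a false alarm.
trials, hits = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if abs(two_sample_z(a, b)) > 1.96:  # roughly p < .05, two-tailed
        hits += 1

false_positive_rate = hits / trials
print(f"false positives at alpha = .05: {false_positive_rate:.3f}")
```

The rate hovers around .05 by construction of the threshold, which is exactly why a single significant result is not proof of a real effect.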

# Does Statistically Significant Mean There Is Definitely An Effect?


You have raised some interesting points in this blog, particularly where you mention that “by saying something is statistically significant, doesn’t necessarily mean that the results are significant in the real world”. It is important to consider that researchers often draw a distinction between statistical significance and practical significance. Practical significance means that the treatment effect is substantial, large enough to have practical application, whereas a statistically significant result, whether large or small, simply means that the observed effect is very unlikely to have occurred by chance (Gravetter & Forzano, 2009). The presence of a ‘statistically significant’ effect does not necessarily mean the results are large enough for practical application; a treatment may cause a statistically significant change but not a clinically significant one (Gravetter & Wallnau, 2009).

In my own blog I highlighted a piece of research by Krueger et al. (2000), in which Krueger reported difficulties associated with Type II errors. Krueger was studying psoriasis treatments, and believed that treatments should be approved once they have been shown to produce a statistically significant level of improvement. In part this is because they found that setting an “arbitrarily high criterion of clinical efficacy” for new psoriasis treatments could limit the development and approval of useful treatments.

I found this piece runs contrary to both of our viewpoints on statistical significance. Where we were suggesting that statistical significance does not imply practical significance, Krueger et al. (2000) argued that insisting on clinical/practical significance was limiting the development of useful treatments, in effect arguing in favour of statistical significance.


Statistical significance can often be insignificant in real life. The findings of a research project are a rather encapsulated phenomenon. Take, for example, the area of scientific research most directly aimed at affecting real life: medicine and treatment, both physical and psychological. Drug companies often laud a drug’s effectiveness by demonstrating it against a control such as a placebo group, and the public tends to seize upon that success. However, particularly with drug trials, the real value is assessed through comparison with established methods of treatment. Sure, your ‘miracle drug’ may have been 10 times more effective than the placebo, but if the current leader is 9.9 times more effective than a placebo, you really haven’t advanced the research… sorry Mr Multi-Billion-Pharmaceutical-Magnate, the truth hurts sometimes.

The same issue is found in psychological research, perhaps most easily illustrated by the concept of ecological validity. You might have demonstrated that a behaviour is significantly exhibited in the controlled artificial environment of the lab but if this doesn’t hold up in everyday situations, then the findings really aren’t of much use!

Hi,

This is an interesting question, and we do have various tools available to us that can help in answering it. Measures of effect size exist for precisely this reason, and using them really increases our understanding of the data we’re looking at; an ANOVA simply tells us, as you have stated, that there is a difference somewhere between conditions. A partial eta squared, on the other hand, can tell us how BIG that difference is. If you have an effect that is both significant and large, there is a much higher chance you have found something meaningful.
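As a sketch of that distinction (pure Python, made-up numbers; Cohen’s d is used here as a simple stand-in for partial eta squared), a large enough sample can make a trivially small effect highly significant:

```python
import math
import random

random.seed(7)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cohens_d(a, b):
    """Effect size: standardized mean difference (how BIG the effect is)."""
    pooled = math.sqrt(((len(a) - 1) * var(a) + (len(b) - 1) * var(b))
                       / (len(a) + len(b) - 2))
    return (mean(a) - mean(b)) / pooled

def two_sample_z(a, b):
    """Significance: large-sample test statistic (whether an effect exists)."""
    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

# Huge samples, tiny true difference (0.08 standard deviations).
control = [random.gauss(0.00, 1) for _ in range(10_000)]
treated = [random.gauss(0.08, 1) for _ in range(10_000)]

z = two_sample_z(treated, control)
d = cohens_d(treated, control)
print(f"z = {z:.2f}, d = {d:.3f}")
```

By conventional benchmarks (d of roughly 0.2 counting as “small”), this result clears the significance threshold comfortably while the effect itself is practically negligible, which is exactly why reporting effect size alongside p matters.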

Peer review, replication and variation of research studies all help to eliminate issues with validity and reliability of measures, and if all studies reach a similar conclusion then findings are further reinforced.

Sam

😀 good blog!

It is an interesting question, and I agree with you.

Although a result may be significant as stated in the findings section, that does not necessarily mean it is significant in real life; randomness and error might have been involved in making it significant. And I think one of the major problems with the studies run through SONA is that the sample is too small. Although we are limited by the sampling method, in that we only (or mainly) use psychology students as our participants, the sample size is far too small (it could be only n = 10-20), so a significant result really doesn’t mean much when the sample is that small. (But of course we can always use a different analysis method or test to make it “better”.)

Thompson wonders whether we should reject a study we’ve conducted if we can only find it to be significant at p < .06. When you think about it, it seems silly that we would consider just throwing away data based on an arbitrary standard alpha level, especially if there’s a moderate effect size. McClean and Ernest also suggest that we underrate effect size, and that in practical terms what we really want is a treatment that works well, not a treatment that works only a little bit but does so reliably to a significant degree. Nakagawa goes on to say that by being so careful about Type I errors and relying heavily on statistical significance, we have created a far greater number of Type II errors. Maybe we should all be more flexible with our stats.

http://www.personal.psu.edu/users/d/m/dmr/sigtest/4mspdf.pdf

http://beheco.oxfordjournals.org/content/15/6/1044.full
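The Type I vs Type II trade-off described above can be sketched with a quick simulation (pure Python, hypothetical numbers; the z approximation is crude at this sample size but good enough to show the pattern). With a genuine medium-sized effect but small groups, a strict .05 criterion misses the effect most of the time:

```python
import math
import random

random.seed(3)

def two_sample_z(a, b):
    """Large-sample two-sample test statistic (approximate z)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# A real effect exists (d = 0.5), but groups are small (n = 15 each),
# so every failure to reach p < .05 is a Type II error (a miss).
trials, misses = 2000, 0
for _ in range(trials):
    control = [random.gauss(0.0, 1) for _ in range(15)]
    treated = [random.gauss(0.5, 1) for _ in range(15)]
    if abs(two_sample_z(treated, control)) <= 1.96:
        misses += 1

type_ii_rate = misses / trials
print(f"Type II error rate with n = 15 per group: {type_ii_rate:.2f}")
```

The miss rate here dwarfs the 5% Type I rate we guard against, which is Nakagawa’s point: guarding one error type this strictly inflates the other.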

