I've seen a few studies like this. I've also seen some that give the opposite result. The ones where I've looked up the details (which I haven't for this one) have two serious statistical problems. Both stem from the fact that you will see patterns even in random data. To guard against this, mathematical tests are done to check whether the result is larger than what you would expect by chance. But even so, large effects can turn up by chance, so you have to pick a "significance level." E.g. you say "this is significant at the 0.05 level." That means you'd get an effect that large in purely random data 5% of the time. A lot of social science work uses the 0.05 level.
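To make that concrete, here's a minimal sketch (my illustration, not from the post): feed a standard significance test pure noise many times and count how often it comes out "significant" at the 0.05 level. By construction there is no real effect, so roughly 5% of runs should still pass.

```python
# Simulate a one-sample t-test on pure noise, repeated many times.
# Every "significant" result here is a false positive by construction.
import math
import random
import statistics

random.seed(1)

def significant_at_05(sample):
    # Two-sided one-sample t-test against mean 0, using the rough
    # large-sample cutoff |t| > 1.96 for p < 0.05.
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return abs(mean / se) > 1.96

runs = 10_000
hits = sum(significant_at_05([random.gauss(0, 1) for _ in range(100)])
           for _ in range(runs))
print(f"'significant' results in pure noise: {hits / runs:.1%}")
```

The fraction comes out close to 5%, which is exactly what the 0.05 level promises: that's the rate of flukes you've agreed to tolerate.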
So here are the problems:
1) If people run these kinds of tests a lot, 5% of the time they're going to get positive results even when there is no effect. But only the tests that "work" get published, because journals are normally not interested in tests that failed. Unless you know how many people tried and failed, you have no way to know what the actual significance level is.
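A toy version of that filter (my illustration, with made-up numbers): suppose 1000 labs each test a hypothesis that happens to be false, and only significant results reach the journals. The published record then contains dozens of "effects," every one of them a fluke, and the reader never sees the denominator.

```python
# Publication-bias sketch: 1000 true-null experiments, each with a 5%
# chance of a false positive; only the positives get "published."
import random

random.seed(0)

labs = 1000
published = sum(random.random() < 0.05 for _ in range(labs))
print(f"{published} of {labs} null experiments produced a publishable 'effect'")
```

Around 50 papers come out of nothing, and since the ~950 failures are invisible, the literature alone can't tell you the true false-positive rate.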
2) Many of the experiments check many different variables. You'll often see studies saying things like "while survival wasn't improved by prayer, the level of pain control was better." It's pretty clear that they asked a number of different questions. How many? If they asked 20 questions at the 0.05 level, you'd expect about one of them to come out positive by chance alone.
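The arithmetic behind that (a sketch, assuming the questions are independent): the chance that at least one of k tests at the 0.05 level fires on pure noise is 1 − 0.95^k, which grows quickly with k.

```python
# Multiple-comparisons sketch: probability of at least one false
# positive among k independent tests, each at the 0.05 level.
p_each = 0.05
for k in (1, 5, 20):
    p_any = 1 - (1 - p_each) ** k
    print(f"{k:2d} questions -> {p_any:.0%} chance of at least one 'hit'")
```

With 20 questions you're up around a 64% chance of reporting *something*, which is why "we measured lots of outcomes and one was significant" is so weak on its own.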
Controlling for these problems is really difficult. Most social science work doesn't do it well, and in this case I'm not sure it even could be done well. It turns out that even medical research suffers from them. Retrospective analyses have suggested that a lot of things we think we know about social science and the effects of drugs are wrong. I've started seeing articles raising concerns about this, but as far as I know there aren't effective approaches in place to prevent the problems.
I'm a bit concerned that even statistics courses don't get these points across to students effectively. While the pressure to publish is strong enough that some people will knowingly publish meaningless results, I suspect a lot of scientists using statistical methods simply don't understand their limitations.
Yes, I used to teach statistics. I have no idea whether my students came away from the course understanding these issues.