Statistical quote of the day

spunky

King of all Drama
#21
And let's face it, how many peer reviews are garbage, either someone skimming an article, or someone who had their GA/TA do the review?
WHOA! What do YOU have against reviewers who subcontract their grad students to review articles???

some of us have to pay the rent somehow... :p
 

TheEcologist

Global Moderator
#22
Good point. Peer reviewers seem to wield a lot of power with not a lot of accountability. I wonder sometimes if the peer reviewers' names should be published along with articles and/or have their referees' reports attached in online supplements.
I don't completely agree; it is also the Editor's (or Associate Editor's) task to judge the quality of reviews. Reviewers can actually get blacklisted if they consistently give bad reviews. Secondly, the system we have "is the worst possible system except for all the others". There are very good reasons why reviews - in some cases - have to be blind. Many people would be far less critical if they knew their names would be known to the authors. Harsh but fair reviews of someone's personal work can drive that person up the wall and create lifelong enemies (which has happened in cases where reviewer names leaked out). In reviewing, I would go even further and opt for double blind, as I find that when I review a paper by someone very important in the field, I tend to accept assumptions more easily, "just because it's a big name"... something I should not do.

Thus, I personally believe that the Editor should always make the final decision, on both the quality of the paper and that of the reviews. Alas, in practice reviewers are in short supply and there are many, many leeches (everybody knows someone who never reviews papers but submits a lot, right?).

The fact is that shitty papers will get through; it's a synergistic system that relies on the goodwill (and free time) of people who often don't have much free time anyway. Sure, you should always check the other person's analyses, which is why I think the data should always be provided to reviewers. And all journals should only allow LaTeX-Sweave-type documents (at the moment very few journals even support LaTeX; see the sketch at the end of this post). I ask myself, though: can you really detect expertly forged data?

No, likely the only way to do this is to let the process of Science run its course - if your results cannot be reproduced, your theories will die. And here lies the heart of the modern problem, in my opinion: Science needs people to reproduce results before it can progress. However, high- and mid-tier journals only want the most novel results possible. So who is going to reproduce the experiments of a forged study and find out that something is fishy? It's seen as a waste of time, because whoever wants job security must have a couple of high-ranking journal publications. This is an increasing problem in my opinion (directly caused by efforts to measure a scientist's productive output as if he/she were a factory worker).

So my recommendation for review is: double-blind, with LaTeX-Sweave documents in which you review not only the novelty and logic but also the analysis and data. My recommendation in general is: reproduced research is not second to novelty!!! Hmmm, maybe citation indices should be changed to "reproducible research indices"?
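To make the LaTeX-Sweave idea concrete, here is a minimal sketch of what such a submission could look like (the file name, data file, and variable names are made up purely for illustration): a reviewer recompiles the .Rnw file, which re-runs the analysis against the data supplied with the manuscript.

    % analysis.Rnw - hypothetical example; process with: R CMD Sweave analysis.Rnw
    \documentclass{article}
    \begin{document}
    We model yield as a function of treatment.
    <<model, echo=TRUE>>=
    dat <- read.csv("submitted_data.csv")  # data file submitted with the manuscript
    fit <- lm(yield ~ treatment, data = dat)
    summary(fit)
    @
    \end{document}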
 

trinker

ggplot2orBust
#23
TheEcologist, you have some very good points (and have altered my outlook on some of them), especially the point about double-blind reviewing of big names.

I agree with letting science run its course… almost. However, in some fields such as education or medicine the media gets hold of a paper as soon as it is released, and without the study being subjected to replication, a theory is held as gospel in the public's eyes. Wakefield's paper linking autism and immunization is a good example of this. Kids died because parents believed the media. That's not cool. Maybe a great deal of responsibility needs to be placed on Editors to verify in greater detail new theories that may be used to the detriment of the public. The media isn't good at actually tearing apart a study; they read some results and sensationalize them. This also happens a great deal in my field of education. We see policy makers jumping from one theory to another without actually scrutinizing the studies they're basing the policy on. This plays to my earlier point that we're under a great deal of pressure to be published. We may overstate claims in order to seem fresh. There's a great deal of ethics involved in research, but this really hasn't been discussed much in my program (maybe it is in others).

I like your idea of double blind plus a review of the data. I love the concept of reproducible research (this point particularly needs to be taken up in my field (literacy), which is dominated by qualitative studies). Often a point is made about one classroom or one student based on one transcript at one point in time. Technically you can't generalize these findings to a population, but people read a finding as, "this is applicable to everyone". I'm really pushing hard for more statistical analysis in my field (I don't think statistics tell the whole story, but they provide more evidence and a different perspective on the problems of the field). I'm really pushing for the use of sentiment analysis in my own field as a means of discourse analysis (all right, I'm off on a tangent, so back on track...).

I'm not sure if I agree with the use of LaTeX-Sweave (don't get me wrong, I love Sweave), but this isn't practical for qualitative researchers, even those who do use statistical analysis. If I mention LaTeX to these people they look at me like I have three heads. I use LaTeX even when I'm writing a qualitative paper because I like the product it produces (fine control of text and graphics, quality of graphics, easy document structure changes). That being said, I'm an oddity. I think the Sweave+R reproducible research proposition suffers from the same problem my suggestion of revealing reviewers' names suffers from (as you've pointed out): there are unintended consequences. It would force people to use R and LaTeX (again, I'm a big fan of both), but I'm all about intellectual freedom, and that freedom is part of what makes R king. I think R gets better because it competes with SAS, SPSS, etc. If you force people, in essence, to use R, you run the risk of taking away the freedom of choice. That is usually bad for innovation.

I would also like it if you had to append your data set (be it qualitative or quantitative). This would hold people more accountable. Right now you only have to state your methods and describe your data. I think researchers would be more careful if their data sets could be used to scrutinize their results. Again, this runs into problems in that people could steal your data set (and in my field you may publish off a qualitative data set for 10 years). This is your bread and butter. Maybe the data should only be provided to the reviewers, as you suggested, and not opened up to the public at large.

Spunky, I understand you may have a vested interest in non-PhDs reviewing articles, but I really don't believe the majority of non-PhDs are capable of really reviewing an article and providing the feedback necessary for a proper review of someone's work. Even new PhDs may lack the depth of knowledge to review an article.

This is a pretty interesting conversation and my ideas/opinions are not fully formed (TheEcologist has changed my view already). Feel free to retort and critique as TheEcologist did; it extends the thinking (Vygotsky and Bakhtin would be proud).

EDIT: This all ties back to your original Coase quote. Fake data will definitely confess if you torture it. Coase's quote is funny but dead serious at the same time.
 

noetsi

Fortran must die
#24
That is quite funny, as those same 'neutral models' caused quite a stir in Ecology as well (some see the theory as a milestone in the evolution of ecology into a quantitative science).

btw what is a donnybrook?
A donnybrook means a huge fight or controversy.
 

noetsi

Fortran must die
#26
Peer review is generally designed to raise theoretical or methodological issues, not to catch fabrication of data or plagiarism....
 

CowboyBear

Super Moderator
#27
Oh wow, I've missed some interesting posts on this thread.

TheEcologist, you have some very good points (and have altered my outlook on some of them), especially the point about double-blind reviewing of big names.
Ditto. My reason for suggesting that peer reviewers' names should perhaps be published was to improve the quality (i.e. decrease the laziness) of some reviews. I find lazy reviews quite irritating (e.g. one vague and incomprehensible paragraph). Negative but well-justified reviews are easier to live with: they teach you something. However, as TheEcologist points out, printing reviewers' names could have the unintended consequence of stopping reviewers from writing frankly negative reviews, so it perhaps wouldn't be such a good idea. But perhaps journals could still publish peer reviewers' reports anonymously in online supplements, so as to demonstrate the comprehensiveness and quality of the reviews...?

Spunky, I understand you may have a vested interest in non-PhDs reviewing articles, but I really don't believe the majority of non-PhDs are capable of really reviewing an article and providing the feedback necessary for a proper review of someone's work. Even new PhDs may lack the depth of knowledge to review an article.
This article actually found that the quality of peer reviewers' reports seems to decrease over time. I don't imagine that many of the peer reviewers included would have been pre-PhD, but I think the bigger point is that the quality of peer review doesn't necessarily increase with experience (or not linearly, anyway!).

No, likely the only way to do this is to let the process of Science run its course - if your results cannot be reproduced, your theories will die. And here lies the heart of the modern problem, in my opinion: Science needs people to reproduce results before it can progress. However, high- and mid-tier journals only want the most novel results possible. So who is going to reproduce the experiments of a forged study and find out that something is fishy? It's seen as a waste of time, because whoever wants job security must have a couple of high-ranking journal publications. This is an increasing problem in my opinion (directly caused by efforts to measure a scientist's productive output as if he/she were a factory worker).
I wonder sometimes whether we need journals with titles like "Journal of Replication in Psychology" (I've just googled this to check it doesn't already exist!). I actually don't think this would necessarily be a bad avenue for commercial publishers to take, as well as being good for science: I'm not sure that the pragmatic/commercial bias against replications is that well justified either. Basically, publishers and authors at the moment tend to have explicit or implicit policies against replications, probably because for any given research finding, the original finding is likely to get much more attention than the replication. More citations of an article are better for the journal and better for the author too...

But the thing is... replications, I would think, are surely much more likely to be directed at new, interesting, and important findings (agreed?). So while we'd expect the replications of these findings to get less attention than the originals, they may still get more citations than the average original article (by piggybacking on the popularity of the particular original studies they are replicating). Publishers and authors might even be able to predict with reasonable accuracy the number of citations a given replication is likely to get, based on how many citations the original got.

All of this is just speculation, but it'd be possible to check whether these arguments actually have any validity... anyone up for writing an article on the business case for replications? ;)
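Just to sketch the kind of check I mean (all numbers below are invented purely for illustration, not real data), one could regress the citation counts of replication papers on the citation counts of the originals and see how predictable replication impact really is:

    # Hypothetical illustration only: invented citation counts for original
    # studies and for the papers that replicated them.
    orig_cites <- c(250, 80, 40, 500, 120, 60, 300, 90)
    repl_cites <- c(60, 15, 10, 140, 30, 12, 75, 20)
    fit <- lm(repl_cites ~ orig_cites)
    summary(fit)                      # slope and R-squared: how predictable is it?
    plot(orig_cites, repl_cites)      # quick visual check
    abline(fit)

With real citation counts pulled from a database such as Web of Science or Scopus, the same couple of lines of modelling code would answer the question.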
 

noetsi

Fortran must die
#28
Academic bias, or journal bias, against replication research is well known and often commented on. A journal wants something new, and it will rarely publish something that replicates past research unless you put a significant twist on it. I've lost track of the number of professors who have pointed this out.

The reason, I suspect, that peer reviews get worse over the course of a career is that a lot of professors lose interest after they have tenure... (and the number of professors who make that point is pretty large as well). :)
 
#29
A pure replication in its current form is boring and takes up too much space that could be dedicated to novel research. I think such papers should be as short as possible, fitting into a couple of pages, but should be appreciated as original studies. This way researchers would be encouraged to perform such studies, because they would be spared writing exhaustive manuscripts, which are much more difficult than the experiments! The journals would happily accommodate them, as they present valuable information, and readers would be happy to see whether the results of a previous study have been confirmed or refuted - and if refuted, what the reason might be - without reading repeated content.
 

CowboyBear

Super Moderator
#30
A pure replication in its current form is boring and takes up too much space that could be dedicated to novel research. I think such papers should be as short as possible, fitting into a couple of pages, but should be appreciated as original studies. This way researchers would be encouraged to perform such studies, because they would be spared writing exhaustive manuscripts, which are much more difficult than the experiments! The journals would happily accommodate them, as they present valuable information, and readers would be happy to see whether the results of a previous study have been confirmed or refuted - and if refuted, what the reason might be - without reading repeated content.
This makes sense to me. In some cases there may be a need for more extensive discussion, but quite often a much shorter article is probably enough. I guess the reason this doesn't happen much is that people feel they need to make a replication article similar in style to an original article (similar length, trying to introduce at least some new aspect, etc.).
 

Jake

Cookie Scientist
#32
Reminds me of this one that a faculty member in my department has posted on one of his class webpages:
Required Memorization: The major reason to study statistics

A mathematician, an engineer, and a statistician were hunting big game on the plains of Africa. They sighted a very large rhinoceros who lifted his head, caught the hunters' scent, and immediately charged the trio.

The engineer--ever the practical one--was the first to lift his rifle and shoot. The bullet grazed the left ear of the rhinoceros.

Never to be outdone by a mere engineer, the mathematician immediately raised his gun and fired. The bullet grazed the right ear of the rhinoceros.

The statistician threw his gun to the ground, raised his arms in the air, and shouted triumphantly, "Got him!"

Moral of the story:

Naturally, the rhinoceros, angered at having his ears nicked, trampled the trio to death. Hence, the statistician saved a valuable member of an endangered species. If you too want to save our planet, then become a statistician.
 
#33
This makes sense to me. In some cases there may be a need for more extensive discussion, but quite often a much shorter article is probably enough. I guess the reason this doesn't happen much is that people feel they need to make a replication article similar in style to an original article (similar length, trying to introduce at least some new aspect, etc.).
Another reason is that scholars are obliged to write extensive papers. Journals and universities won't accept a 1500-word article with only a couple of references as an original study, regardless of the huge amount of work behind it. They consider such papers only as short communications, which carry less than half the academic points of an original study, discouraging the scholar from doing it in the first place, or encouraging them to pad the manuscript with recycled content to match the requirements of original studies (in terms of style, word and reference counts, etc.) once the experiments are done. The problem, IMHO, is those requirements.
 

noetsi

Fortran must die
#34
I knew a well-known professor who, when challenged by his research assistant for writing a tome on something that could have been explained in a few pages, agreed, but noted that no journal would accept such a short article.

There was a stats professor who had a sign on her door that said: "If this was my last day on earth, I would like it to be spent in a stats class."

Below, in small print, it said: "Because it would seem like it went on forever." :)
 

spunky

King of all Drama
#36
Man... they must take some pretty bad stats courses... Am I the only one that enjoys my courses?
I do as well! I even like to crash classes I'm not registered in and hide behind someone really tall so I can still take notes and do assignments without anyone asking me what I'm doing there... people do look at me weird, though, especially if I find out about some special-topics course around the middle of the semester...
 

Dason

Ambassador to the humans
#38
It was actually the stats professor who said that :)
Point still holds. Professors themselves have taken many courses in their day, and if they're saying that the courses they teach seem like they take forever, then I think they're the ones doing something wrong.
 

noetsi

Fortran must die
#39
A good point, Dason... although it's been my observation that professors commonly dislike undergraduate courses (sometimes master's as well) in their field because they find the material so basic that it's boring to them.

Still, it's entirely possible they were just joking around.
 

TheEcologist

Global Moderator
#40
"Facts are stubborn, but statistics are more pliable." - Mark Twain

[From the Linux fortune program. Tip: put the call system("fortune") in your .First function in R - it only works on Linux, though, so if you have Windows, now is the time to make the switch.]
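A minimal sketch of that tip (assuming the Unix fortune program is installed; the check for its presence is my own addition so nothing breaks on systems without it):

    # Put this in your ~/.Rprofile (or save .First in your workspace).
    # It prints a random fortune each time an R session starts.
    .First <- function() {
      if (nzchar(Sys.which("fortune"))) {  # only if the fortune binary is on the PATH
        system("fortune")
      }
    }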