Via WaPo's Wonkblog comes the definitive guide to critiquing research findings that rub you the wrong way. And while this chart refers more specifically to studies on things like health and budget policy, it works surprisingly well for scientific studies, too.
Whether the research was conducted nationally or locally, for example, is not unlike asking whether the test subjects in a psych study were Western, educated, industrialized, rich and democratic (aka WEIRD). The question of whether the research findings generally mesh with those of previous investigations? Science historians have written entire books on this.
Some things we'd add: Does the experiment have a small sample size? Has the lead scientist committed scientific fraud in the past? Is the investigation predicated upon widely cited, but totally bunk, science?
What arguments would you like to see added to the diagram? Remember, as Wonkblog's Dylan Matthews points out, the essence of this flowchart is that its listed criticisms are completely valid, "but given that people tend to read what they want to read in research, these points tend to be used more as bludgeons than as good faith critiques." Just because an experiment has a small sample size or cites bunk science doesn't necessarily mean it's wrong, the same way a researcher's prior indiscretions don't preclude her from doing good science. In other words: try to avoid arguments like "well duh, that study is obvious," or "why are we spending money on this crap and not on cancer research?"