Why should you have anything to learn about ethics in research if you are a good person who learned early about proper behavior? You know not to take things that belong to other people. You know not to cheat. You know all the clichés, so you do unto others as you would have others do unto you, and you give credit where credit is due. So, what else is there? Can’t we say that honest people will produce honest research?
Unfortunately, it is not that simple. We are complex creatures who may bias things in ways we don’t realize. Some people, like Dan Ariely, have made a career of studying this, to fascinating effect. The rest of us ignore their work and its implications for our own research at our peril, even if we are good, honest, fair people.
I suppose we can break it down into three areas: be fair to your data, be fair to other people, and be fair to the environment, including animals if you study them. Being fair to your data means you understand that even the most amazing hypotheses, the ones you think must be true, might not be. Or your data could have a feature that means they are not testing what you think they are testing. The point is, you can’t let what you think is true drive what the data are telling you.
This is harder to do than you might think. We love our hypotheses, but we should love them best when we hit them hardest with tests that might disprove them. The best thing to do is to collect your data blind, so you don’t actually know how a given count will affect your end result. Say you think that young mockingbirds nest later than old mockingbirds. The best way to test the hypothesis is to dissociate measuring age from measuring nesting date, bringing the two together only after the measurements are taken.
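One way to keep the two measurements apart is to record each one in a separate file, keyed only by an anonymous ID, and merge them only when data collection is finished. A minimal sketch of that idea, with made-up bird IDs and values:

```python
# Hypothetical mockingbird example: one dataset holds ages, another holds
# nesting dates, each keyed only by an anonymous bird ID. Neither observer
# sees the other's measurements until the merge at the very end.
ages = {"bird_01": 1, "bird_02": 4, "bird_03": 2}              # age in years
nest_dates = {"bird_01": 132, "bird_02": 118, "bird_03": 127}  # day of year

def merge_blind(ages, nest_dates):
    """Join the two blind datasets on bird ID after collection is complete."""
    return {b: (ages[b], nest_dates[b]) for b in ages if b in nest_dates}

merged = merge_blind(ages, nest_dates)
```

Only at this final step does anyone see age and nesting date side by side, so neither measurement can be nudged toward the hypothesis.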
Randomness is another part of blindness. If you are measuring a few fruiting bodies per plate, as we so often do, put a piece of paper with dots on it under the plate and measure the fruiting body closest to each dot, so you won’t inadvertently pick larger or smaller fruiting bodies, depending on the question.
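The dotted-paper trick is just nearest-neighbor sampling from random points. A sketch, with fruiting-body positions invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical setup: fruiting bodies as (x, y) coordinates in mm on a
# 90 mm plate, and five random "dots" on the paper underneath.
bodies = [(random.uniform(0, 90), random.uniform(0, 90)) for _ in range(200)]
dots = [(random.uniform(0, 90), random.uniform(0, 90)) for _ in range(5)]

def nearest_body(dot, bodies):
    """Pick the fruiting body closest to a dot, not the one that catches the eye."""
    return min(bodies, key=lambda b: (b[0] - dot[0]) ** 2 + (b[1] - dot[1]) ** 2)

sample = [nearest_body(d, bodies) for d in dots]
```

Because the dots are placed before anyone looks at the plate, the choice of which bodies to measure cannot drift toward the big or small ones.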
Another place randomness is needed has to do with inadvertent factors. Don’t score all of one treatment in the morning and another in the afternoon. Don’t put the cages of animals receiving treatment A all in a row closer to the window. Intersperse them randomly with the cages of animals receiving treatment B. If you really were working with animals, you would also worry about interactions between neighboring cages. Don’t put all the test Petri plates on one shelf and the others on another shelf. Don’t have one person score treatment A and another person score treatment B. Worry constantly about inadvertent effects. You could unintentionally generate a pattern that you waste a lot of time pursuing.
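Interspersing treatments is easy to do properly with a shuffle. A minimal sketch, assuming a hypothetical rack of twenty cage positions split evenly between treatments A and B:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical rack: 20 cage positions, 10 cages per treatment.
# Shuffling the treatment labels breaks any link between treatment
# and position (window row, shelf, morning vs. afternoon scoring).
positions = list(range(20))
treatments = ["A"] * 10 + ["B"] * 10
random.shuffle(treatments)

layout = dict(zip(positions, treatments))  # position -> treatment
```

The same shuffle works for shelves, scoring order, or which person scores which sample.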
Decide on your experimental design before you do the experiment. Don’t keep running the statistics as you go, to see if you’ve reached significance. This is a particularly notorious problem in psychology, where students come in to do various experiments and the experiments terminate after variable numbers of participants. In this particular case, the problem is called p hacking, and there is a huge literature on it. I had not heard of it until Michele Johnson told me about it. People plot the p values from a researcher’s studies to see whether values just below the significance threshold, usually 0.05, occur more often than expected by chance.
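A quick simulation shows why peeking at the statistics as you go is so dangerous. This sketch (my own illustration, not from the p-hacking literature) draws data with no true effect at all, re-tests after every new participant, and stops as soon as p dips below 0.05; the p value uses a normal approximation to the t-test for brevity:

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def one_sample_p(xs):
    """Two-sided p value for mean == 0, via a normal approximation to t."""
    n = len(xs)
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def peeking_experiment(start_n=10, max_n=100):
    """Add one subject at a time; stop the moment p < 0.05."""
    xs = [random.gauss(0, 1) for _ in range(start_n)]  # no real effect
    while len(xs) < max_n:
        if one_sample_p(xs) < 0.05:
            return True          # declared "significant" -- a false positive
        xs.append(random.gauss(0, 1))
    return one_sample_p(xs) < 0.05

# Fraction of 1000 null experiments that reach "significance" by peeking.
false_positives = sum(peeking_experiment() for _ in range(1000)) / 1000
```

Even though every dataset is pure noise, far more than 5% of these runs end up "significant," because each extra peek is another chance to cross the threshold. Fixing the sample size in advance keeps the error rate at the nominal 5%.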
Well, that was a little bit about being fair to your data. Being fair to other people and fair to the environment will have to wait. They are also complex.