A week ago, I thought I had finished building the MATLAB script that would automatically present my dot stimuli. I had sorted out the last few bugs that were interfering with the subjects' response key presses and had figured out how to fuse the subject's vision so that each eye appeared to be receiving the same stimuli although, of course, they were not. I had tested the script multiple times on myself without a hitch and had even run the whole experiment successfully on a friend without any complications. Then a graduate student in the lab suggested I make the dot stimuli slightly larger to account for the larger computer monitors I was using as part of my stereoscope. While this was a simple enough fix, it got me thinking: how do you know when you have finished editing your methodology?
There is a massive number of research papers out in the world today. Sure, that number shrinks as you specialize more and more – in my case, focusing on neuroscience papers pertaining to the superior colliculus and its role in visual cognition – but there is still a large number of papers, and thus methodologies, to choose from. Without a way for researchers to go back and comment on the validity and feasibility of their varying approaches to the same problem, it can be difficult to decide which parts of their methodologies you should adopt in your own experiment.
For now, I have mainly been avoiding this problem by deferring to the opinions of those senior to me – my advisor and the graduate students and postdoctoral researchers in her lab. However, there may come a time when I am in a more senior position myself and have to advise others on the experiments that they are running. While I hope that by then I will have enough experience to advise them well, I also hope that by then there will be a more objective way of judging the relevance of papers than experience alone, for even the most knowledgeable can make mistakes.