The Like of Science – A New Impact Factor in the New Publication Landscape


The Impact Factor, a measure of how many citations a publication or a researcher is able to attract, is one of the most controversial yet most widely used quality indicators in science. This single number underpins decisions on funding policy, evaluation of research quality, scholarly appointments and comparative analysis of research excellence between schools and departments. It is a simple yet extremely blunt tool, much as grades are blunt tools for evaluating school quality. But in contrast to a school, where grades are set mainly by testing knowledge of known concepts, in science the Impact Factor judges the quality of research in new fields by how it is received by peers writing new publications. And recently, the landscape for publication has changed drastically.
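For concreteness, the classic two-year journal Impact Factor behind this single number can be sketched as follows (the function name and the figures in the example are illustrative, not taken from any real journal):

```python
def two_year_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Classic two-year journal Impact Factor for year Y:
    citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2016 to its 200 citable
# items from 2014-2015 gives an Impact Factor of 3.0.
print(two_year_impact_factor(600, 200))  # → 3.0
```

The bluntness criticised above is visible even in this sketch: a single highly cited paper can lift the average for every other paper in the journal.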

The Cost of Publication Matters

Publishing scholarly findings is a protracted process. Publication of scientific material is expensive, slow, closed and inefficient, and the incentives of publishing are misaligned with the vision of research itself: to spread knowledge. Preparing a publication requires a fair amount of editing, and the uncertainty of acceptance may deter many academics from publishing partly completed projects, negative findings, or replications and verifications of already published studies. Often this so-called grey literature ends up in the desk drawer, as it is unlikely to pass the quality threshold for journal publication.

As the number of publications is an important indicator for scientific career development, this may even discourage researchers from undertaking projects with uncertain outcomes, abandoning the chance of a groundbreaking discovery for a solid track record in a well-established field. What scientist has not been frustrated by the slow, tedious process of publishing? This is especially true in transdisciplinary science, where the challenge is to match the expectations of the editor as well as those of scientific quality, and where much time is spent tracking down the journal that provides the best forum for a particular topic.

The high cost of publishing also deters researchers from spending time writing up and publishing unless strictly necessary, for instance as a means to procure more funding or in the run-up to a funding review. This throttles publication efforts unless they are likely to lead to more funding. In many cases, it is even cost-efficient to prepare data and experiments for publication rather than for the open investigation of a hypothesis.

A New Publication Landscape

A number of emerging publishing platforms, among them F1000, Figshare and The Winnower, are challenging the old way of publishing scientific findings. Instead of a lengthy process involving submission, review and possibly multiple iterations before publication, these initiatives offer instant publication with the ability for peer commenting and review after publication. Peer review is no longer used as a tool to weed out low-quality science, but as a means of scoring points. The benefits of this new publication modus operandi are many:

  • Shorter time between result and publication.
  • Publication of partially finished results.
  • Publication of guidelines, SOPs, test protocols, amateur and citizen science, discussions, essays, short articles and other scientifically relevant grey literature.
  • Publication of findings in emerging crossover research fields.
  • Publication of research which has for some reason been halted before completion (researchers changing roles, insufficient funding, lack of time etc.).
  • The possibility for other researchers to pick up and continue unfinished experimentation.
  • Publication of more negative findings, giving us the opportunity to learn from the past.
  • The ability to demonstrate a track record to support funding applications.
  • Increased exposure of scientific activities, preventing the same research approach from receiving funding twice or being accidentally repeated.
  • Ensuring that all outcomes of a funded project are indeed published.
  • Portraying a more realistic picture of science, not biased by visibility of only successful or impactful science with positive findings.
  • Higher incentive to publish what does not fit mainstream science.

Other platforms, such as Authorea, Experiment, Zooniverse, Useed and Consano, go even further and offer the possibility to crowd-research. The new publication methods are endorsed by several funders, including Horizon 2020, the NIH and the Wellcome Trust, which has even created its own publication platform, Wellcome Open Research.

A Popularity Contest?

The scientific blind review process is intended to ensure impartiality. Unfortunately, the narrowness of many scientific fields makes it easy enough to determine who the author of a masked paper is, and even who wrote the review. An open process of post-publication review may actually ensure better transparency and a better ability to judge who comments on what and why. In a world which moves faster and calls for quicker responses, this is an appealing way to increase the rate of scientific turnover. But as more material is published, a reliable review process becomes increasingly important to help filter content. An interesting parallel to the scientific Impact Factor is the measurement of impact in social media. Your achievements on Twitter, Facebook or in the news are measured by how many followers and positive reactions you are able to attract. The question immediately arises:

Who does the scoring and why?

A New Responsibility

In a landscape where editors no longer make decisions on behalf of the community, the responsibility for quality control shifts towards the community itself. Science as a community project may change the way we think about and do research, and the way we see both the research and the researcher. Gone is the picture of the elitist in the ivory-tower lab, investigating concepts the rest of us have no chance to grasp.

In the future, how will we see the journal paper? As a guarantor of quality, it is likely to remain. Perhaps, instead of publication, we will see journals compiling subsets of already published work and placing a quality stamp on them. Perhaps the Impact Factor will change to a more dynamic review score, set by anyone, not only by our fellow scientists. Will we be back in school, receiving a gold star from the teacher for our efforts?

 

About Anna Leida Mölder

Hi, I am a budding scientist enjoying trans-disciplinary thinking between physics, art and biology. I have too many projects, too many hobbies, too many places and people I love, and too little time! So if you want to get in touch, do it straight away :) I have a science/politics-related blog here (thefirewindow.annaleida.com) and a twitter-feed (#annaleidas) which I aspire to keep updated.


