
EDITORIAL

Brazilian scientific journals: visibility and charm

Mauricio Rocha e Silva

Editor of Clinics. São Paulo, SP, Brazil. mrsilva36@hcnet.usp.br

When we speak of our scientific journals, it is important not to forget that they fall into two categories: with and without peer review. Only the former should be taken into consideration. Altogether, according to the Coordination for the Improvement of Higher Education Personnel (CAPES), there are approximately six thousand journals, but only 8% publish original, peer-reviewed science. Before anyone considers this atypical, it is worth noting that Ulrich's large international archive lists about 300,000 scientific journals worldwide, of which only 27,000 are peer reviewed. We are therefore well within the global mean.

As far as visibility is concerned, a different dividing line applies: the last century versus the new millennium. Until 1999, most of our journals were completely invisible, and only a very small handful had their articles cited at all. No Brazilian journal had ever reached an impact factor greater than 1.000, and very few reached 0.500! With the new millennium everything changed, thanks to SciELO and PubMed. The creation of a virtual library with free access (SciELO), available through PubMed, means that anyone connected to the Internet can search for and freely download any article from our journals. Naturally, peer review is the sine qua non for a paper to be accepted into SciELO. The magic did not take long to occur. In 2004, two Brazilian journals reached an impact factor above 1.000 and never fell back. Only six years later, we now have a journal with an impact factor above 2.000 and thirteen above 1.000. The estimates for 2011 suggest we will have two journals above 2.000 and approximately fifteen above 1.000. Downloads of scientific articles published in our journals grew from 100 thousand in 1999 to over 100 million in 2010.

Our journals are visible, and the articles published in them are read and cited as never before. There is no doubt that we have become more attractive; we could, however, be more charming! Unfortunately, the obstacle to meeting that goal is an internal one. The system used by our graduate studies programs to evaluate published articles does not offer much help. Any student or adviser who publishes in a Brazilian journal suffers a 20 to 60% reduction in their grade for having published at home. The article classification system rests on a dangerously biased idea: the score assigned to an article is a function of the impact factor of the journal in which it was published. We are not the only ones guilty of this sin: several international agencies dedicated to classification or to the distribution of grants commit it as well. The father of the impact factor himself, Eugene Garfield, has made this observation: the impact factor reflects the importance of journals, never the importance of the articles published in them, because the distribution of citations among articles is tremendously asymmetric.

In a recently published study(1), I examined approximately 7,000 articles published in 60 journals with impact factors between 1 and 50, and I observed that they all follow a Pareto-type distribution: in its original form, the Pareto Principle states that in every human activity, 80% of the effects come from 20% of the causes. For citations in scientific journals the situation is not quite as extreme, but the asymmetry is deep: 50% of the citations go to the 20% most frequently cited articles, while only 3% of the citations go to the 20% least cited. What is the consequence of this asymmetry for the indirect judgment of articles? Very simple: no matter where an article is published, it has close to a 30% chance of being cited more frequently, and about a 60% chance of being cited less frequently, than the journal's impact factor suggests. If authors manage to publish in the highest category of the evaluation system, they will certainly not be undervalued, because they have already received the maximum score; but there is a 50% chance of being overvalued, which is indeed fortunate for them! The problem becomes much more complicated if the article appears in one of the categories below the highest: in this case, our authors face a quite significant (30%) risk of being undervalued, because they will be cited more often than the journal's impact factor suggests and should therefore receive a higher score. This is their misfortune. The risk of overvaluation remains at 50% (lucky them!). Only 20% of the articles are correctly evaluated. There is a name for this: the Positive Predictive Factor (PPF), which defines the percentage of correct evaluations delivered by any diagnostic test. Any system that evaluates scientific publications on the basis of the journal's impact factor has a PPF of 40% for journals in the highest category and 20% for those in the lowest. In other words, a 60% error rate in the highest category and an 80% error rate in the lowest!
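
To make this asymmetry concrete, the short Python sketch below simulates it. It is purely illustrative: it assumes a log-normal citation distribution with arbitrary parameters, not the data of the study cited above(1). It merely shows how a heavy-tailed distribution sends most citations to a small minority of articles and leaves most articles below the journal's mean citation rate, which is essentially what the impact factor measures.

# Purely illustrative sketch: the log-normal shape and its parameters are
# assumptions chosen for illustration, not data from the study cited above.
import random

random.seed(42)

# Simulate citation counts for 7,000 hypothetical articles of one journal.
N = 7000
citations = sorted((random.lognormvariate(1.5, 0.9) for _ in range(N)), reverse=True)

total = sum(citations)
top_20 = citations[: N // 5]        # the 20% most cited articles
bottom_20 = citations[-(N // 5):]   # the 20% least cited articles

print(f"citations captured by the top 20% of articles:    {sum(top_20) / total:.0%}")
print(f"citations captured by the bottom 20% of articles: {sum(bottom_20) / total:.0%}")

# The impact factor is essentially the journal's mean citation rate, so a
# journal-level score overvalues every article cited below that mean and
# undervalues every article cited above it.
mean_citations = total / N
above_mean = sum(c > mean_citations for c in citations) / N
below_mean = sum(c < mean_citations for c in citations) / N
print(f"articles cited above the journal mean (undervalued): {above_mean:.0%}")
print(f"articles cited below the journal mean (overvalued):  {below_mean:.0%}")

Under a distribution this skewed, a handful of articles dominates the citation count while the majority fall below the journal's mean, which is precisely why a journal-level impact factor is a poor proxy for the merit of any individual article.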

Now let us apply this concept to our journals. In the evaluation system of Brazilian graduate studies programs, none of the Brazilian journals belongs to the highest stratum: consequently, the 20% Positive Predictive Factor applies to all of them, which means that 80% of the articles published in them are incorrectly (and unfairly) evaluated. In 50% of the cases the concept of luck certainly applies, but about 30% are given a lower score than they deserve. One of the arguments used by every evaluating institution is that, for recently published articles, it is impossible to judge correctly how many citations they will receive. Let us take an absolutely extreme example: how many citations will an article published in December of 2011 have received by June of this year? The answer is almost certainly none, regardless of whether the journal in which it was published has an impact factor of 1, 5 or 50! If the evaluation window is a bit longer, say June 2011 to June 2012, the article will already be visible and probably cited. The latest idea is to use a newly proposed procedure, which I have named Continuously Variable Rating(1). The old method, let us not forget, has an 80% chance of yielding an incorrect result.

Before anyone thinks that my purpose is purely negative, I insist on stressing what I have said repeatedly: Brazil's debt to CAPES is inestimable. Scientifically, we are who we are, one of the G20 countries of science, thanks to political will and to everything that CAPES has done for this country. Seventy years ago we were a scientific dwarf looking up at Argentina, green with envy. Today we produce as much science as Mexico, Argentina, Chile and all the other Latin American countries put together! Much of this is owed to the CAPES graduate studies system, the CAPES portal and the dynamism of CAPES. Let us, therefore, continue to hope that CAPES will correct this odd inconsistency, which stands out from the rest of its programs just as a missing front tooth detracts from an otherwise beautiful face!

References

1. Rocha-e-Silva M. Continuously Variable Rating: a new, simple and logical procedure to evaluate original scientific publications. Clinics. 2011;66(12):2099-104.

Publication Dates

  • Publication in this collection
    20 Mar 2012
  • Date of issue
    Feb 2012