Wolfgang G. Stock of the Department of Information Science at the Heinrich-Heine-University in Düsseldorf has an interesting essay titled "The inflation of impact factors of scientific journals" coming out in ChemPhysChem (subscription required). In it he makes the point that impact factors (IFs), h-indices, and even eigenfactor scores for journals are flawed and really not very informative. For starters, he points out:
"For example, the British library holds more than 40000 scientific serials and adds about 800 new journals each year, whereas the two most comprehensive multidisciplinary databases, namely Elsevier's Scopus and Thomson Reuter's Web of Science (WoS) cover only 16000 (Scopus) and 10000 (WoS) periodicals."
Note that because Scopus and WoS survey different sets of periodicals, the two sources often report different IFs for the same journal.
"All indicators that work with relative frequency measures (ie, all Group 2 indicators) suffer from serious statistical problems. It is a precondition for calculating average values (in our case: average cites per publication) that there is a Gaussian distribution... In journal informetrics this is not the case."
Seems like a serious flaw to me...
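To see why averaging over a skewed distribution misleads, here is a toy sketch (my own illustration, not Stock's data): citation counts for a hypothetical journal drawn from a heavy-tailed distribution, where a handful of blockbuster papers drag the mean far above what a typical article receives.

```python
import random
import statistics

random.seed(42)

# Hypothetical journal: 200 papers with citation counts drawn from a
# lognormal distribution, mimicking the common pattern in which a few
# highly cited papers dominate the journal's total citations.
citations = [int(random.lognormvariate(mu=0.5, sigma=1.5)) for _ in range(200)]

mean_cites = statistics.mean(citations)      # what an IF-style average reports
median_cites = statistics.median(citations)  # what a typical paper actually gets

print(f"mean cites per paper:   {mean_cites:.2f}")
print(f"median cites per paper: {median_cites:.1f}")
# The mean sits well above the median, so the journal-level average
# says little about the influence of any individual article.
```

The gap between mean and median is exactly the statistical problem Stock flags: an impact factor is a mean, and for heavy-tailed citation data the mean is not representative of the articles it is supposed to summarize.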
"In no case is it possible to use a journal impact factor on the article level to evaluate the influence of an article, an author or an institution."
Of course we all knew that.
Wish the bean-counters did...