The h-index (short for highly cited index) was developed in 2005 by Professor Jorge Hirsch, a condensed-matter physicist at the University of California, San Diego, to quantify the impact and quantity of an individual's research output.
The index is a measure of the number of highly impactful papers a scientist has published. The larger the number of important papers, the higher the h-index, regardless of where the work was published.
The h-index can therefore be regarded as a measure both of the number of publications (productivity) and of how often they are cited (impact).
A scientist has index h if h of his or her NP (number of papers) papers have at least h citations each and the other (NP – h) papers have fewer than h citations each.
For example, an h-index of 20 means the researcher has published 20 papers that have each been cited at least 20 times.
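The definition above translates directly into a short calculation. The sketch below (a hypothetical helper, not part of any official tool) computes the h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    A researcher has index h if h of their papers have at least
    h citations each and the remaining papers have fewer.
    """
    # Sort citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    # The h-index is the largest 1-based rank r whose paper
    # still has at least r citations.
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers have at least 4 citations)
print(h_index([25, 22, 21] + [20] * 17 + [1, 0]))  # 20 papers with 20+ citations -> 20
```

Note that the order of the input does not matter; sorting inside the function makes the rank comparison straightforward.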
(Image source: Benchfly weblog, 20 Oct 2010)
In general you can only compare values within a single discipline. Because citation patterns differ between fields, an average medical researcher will generally have a much higher h-index than a world-class mathematician!
Also, if you are comparing people, all h-index values need to be calculated from the same database using the same method.
The h-index may be less useful in some disciplines, particularly some areas of the humanities.
It relies on citations to individual papers rather than to the journals they appeared in, which is a truer measure of quality.
It is not skewed by a single well-cited, influential paper (as the total number of citations would be).
It is not inflated by a large number of poorly cited papers (as the total number of papers would be).
It minimizes the politics of publication. A high-impact paper counts regardless of whether it appeared in a top-tier journal.
It’s good for comparing scientists within a field at similar stages in their careers.
It may be used to compare not just individuals, but also departments, programs or any other group of scientists.
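The points about skew can be checked numerically. In the sketch below (with made-up citation lists), a single blockbuster paper inflates the total citation count but barely moves the h-index, and adding many uncited papers moves it not at all:

```python
def h_index(citations):
    # h is the largest 1-based rank r (citations sorted descending)
    # whose paper still has at least r citations.
    ranked = sorted(citations, reverse=True)
    return max([r for r, c in enumerate(ranked, 1) if c >= r], default=0)

steady = [30, 25, 20, 18, 15, 12, 10, 9, 8, 7]  # ten solidly cited papers
one_hit = [1000, 5, 4, 3, 0]                    # one blockbuster, little else
padded = steady + [0] * 50                      # same record plus 50 uncited papers

print(sum(one_hit), h_index(one_hit))        # -> 1012 3 (huge total, small h)
print(h_index(steady), h_index(padded))      # -> 8 8 (uncited papers change nothing)
```

This is why the h-index is often read as a measure of a *sustained* body of influential work rather than of one famous result.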
It counts a highly cited paper regardless of why it is being referenced, e.g. for negative reasons.
It doesn’t account for variations in average number of publications and citations in various fields (some disciplines traditionally publish and cite less than others).
It ignores the number and position of authors on a paper.
It is capped by the total number of publications, so researchers with shorter careers are at a disadvantage.
It has relatively low resolution: many scientists end up in the same range, because raising the h-index becomes increasingly difficult the higher it gets (an h-index of 100 corresponds to a minimum of 10,000 citations).
Like all metrics, it is based on data from the past and may not be a valid predictor of future performance. However, in a follow-up publication Jorge Hirsch demonstrated that the h-index predicts future scientific achievement better than other indicators (total papers, total citations, citations per paper).
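The low-resolution point follows from a simple bound: reaching index h requires at least h papers with at least h citations each, so a minimum of h × h = h² total citations. A quick illustration of how fast that floor grows:

```python
# Minimum total citations needed for a given h-index: h papers x h citations each.
for h in (10, 20, 50, 100):
    print(h, h * h)  # e.g. h = 100 requires at least 10,000 citations
```

Doubling one's h-index therefore requires at least quadrupling the minimum citation count, which is why very high values cluster so tightly.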