[August 2014] Monitoring Progress: the ‘Pros’ and ‘Cons’ of Indexes

Publish Date: Tuesday, August 5, 2014

Two weeks ago, UNDP released the 2014 Human Development Index (HDI), which ranks countries from ‘best’ to ‘worst’ according to three dimensions of human development. Now that the HDI is over two decades old, it is worth asking why it has gained such global appeal and how similar indexes might play a role in ESCR monitoring—a topic we have been considering at the Center for Economic and Social Rights (CESR).


Indexes are made up of single numbers, or composite scores, that are calculated by combining multiple indicators. There are a number of indexes related to ESCR that are featured in the working group’s Resource Library. For example:


Pros

Quick and easy comparisons: composite scores allow for useful comparative analyses between countries. A single number can quickly capture the situation in a particular country and suggest where it may be failing to fulfill its human rights obligations. For example, in a 2013 submission on Angola, we noted that it ranked 168 out of 182 countries in Transparency International’s 2012 Corruption Perceptions Index.

Easy to track improvement or decline: indexes published annually or at other regular intervals enable civil society to quickly see changes in a country’s performance, which is critical for assessing norms like progressive realization or retrogression. We cited the Open Budget Index in our latest factsheet on Egypt, for example. Egypt’s score on budget transparency dropped from 43 out of 100 in 2010 to 13 out of 100 in 2012, indicating that the budget information it made publicly available had fallen to ‘scant or none.’

Support advocacy work: for the reasons described above, indexes can play a valuable role in advocacy, particularly ‘naming and shaming’ advocacy that calls out a country for its poor performance. As described in New Horizons in Economic and Social Rights Monitoring, a ‘stunned silence’ overtook the audience when CESR reported, during a presentation at the country’s first Universal Periodic Review (UPR), that the U.S. ranked last out of 24 OECD countries on the SERF Index. In the short time allotted for NGO input at the UPR, the composite score effectively conveyed the gross failure of the U.S. to adequately fulfill ESCR for its citizens and the immediate need for policy reform.


Cons

Politically controversial: although the ability to rank countries is one of the strengths of indexes, ranking can also provoke dispute rather than open a dialogue between those advocating change and those being assessed.

Hides inequality: indexes generally show how a country is performing overall, not how particular groups are faring. This can obscure where deprivations are worst and where deeper investigation may be most urgently needed.

Opaque methodologies: the methodology used to calculate an index is often very complicated! But scrutiny of methods is necessary to determine whether the indicators the index is based on are valid; whether the underlying data are reliable (e.g. whether they are gathered impartially and coded consistently); and whether the score is calculated reasonably (e.g. how the different indicators are weighted).

Simplify a complex situation: because they aggregate to such an abstract level, indexes often render other relevant factors invisible. No single number can fully measure a state’s failure to comply with its rights obligations; a score needs to be contextualized with additional information.
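The weighting concern raised above can be made concrete with a small sketch. The indicator names, values, and weighting schemes below are purely hypothetical and do not come from any real index; the point is only that the same underlying data can yield noticeably different composite scores depending on how the indicators are weighted—which is why an index's weighting choices deserve scrutiny.

```python
# Hypothetical composite index: three indicators, each already normalized
# to a 0-100 scale. Names, values, and weights are illustrative only.
indicators = {"health": 70.0, "education": 55.0, "income": 40.0}

def composite_score(values, weights):
    """Weighted average of indicator values; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(values[k] * weights[k] for k in values)

# Two plausible-looking weighting schemes applied to the same data.
equal_weights = {"health": 1/3, "education": 1/3, "income": 1/3}
income_heavy = {"health": 0.2, "education": 0.2, "income": 0.6}

print(round(composite_score(indicators, equal_weights), 1))  # 55.0
print(round(composite_score(indicators, income_heavy), 1))   # 49.0
```

With equal weights the country scores 55.0; weighting income more heavily pulls the score down to 49.0, even though nothing about the country changed. Published indexes make exactly this kind of choice, which is one reason their methodology documents repay close reading.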


For CESR, indexes have proven to be most effective as a way to offer a general ‘snapshot’ of ESCR-related issues in a country. By flagging deviations from the norm and changes over time, indexes can effectively draw attention to a state’s apparent under-performance. Nevertheless, we’ve been cautious not to overstate the conclusions about a country’s rights compliance that can be drawn directly from composite scores, especially those whose methodology is complex.

It would be great to hear how organizations have used indexes in their ESCR monitoring. Are there other examples to add to the list above? Have you found rankings to be an effective means of monitoring ESCR fulfillment? Do the pros and cons identified here reflect your experience? Are there others that should be mentioned? In what ways can we maximize the benefits and minimize the challenges of using indexes to monitor ESCR?

Allison Corkery