Old October 2, 2003, 05:14 PM
Arnab
Cricket Legend
Join Date: June 20, 2002
Posts: 6,069
Default Here is the explanation from PWC

The Philosophy behind the Ratings

We’ve had a large number of emails recently with questions about the PwC Ratings and what we do or don’t take into account. For example, you have written to ask why we don’t take account of where the match is played, dropped catches, exactly which bowler the batsman was facing for each delivery, the stage of the innings when runs were scored…and so on. In theory, it would be possible to take account of all these factors, and many more. So why don’t we? The short answer is that if you are trying to rate cricketers statistically without including subjective assessments (eg was that a dropped catch or not?) you have to stop somewhere. We could include more factors, but we have taken the rating of cricketers about as far as we believe is credible. That’s only our opinion, and debate and discussion is a healthy and essential part of this whole matter of rating cricketers.

For those who want a more detailed understanding of our philosophy behind the ratings, the following is a rather longer discussion. We are, as ever, interested in your comments, but we apologise in advance that we are unable to respond in any detail to all your emails.


In designing cricket rankings, the question ‘what are they for?’ is critical. Most people broadly think that rankings are there to pick out the current ‘best’ players. But what does ‘best’ mean? Does it mean the player with the best technical ability? Or the player who is currently in the hottest form (regardless of what he was doing a year ago)? Or is it the player who has the best career record, even if he is currently going through a bad patch? Each of these definitions requires a different type of ranking.

So what type of ranking is ours? The PwC Ratings are designed to put more emphasis on what a player has done in his recent matches than on what he did earlier in his career. This means that they will tend to reflect the players who are in form. However, ours are not ‘form ratings’ as such (we would understand form ratings to mean you should only take into account recent matches, whereas the PwC Ratings take into account every match a player has ever played). A better way of understanding the PwC Ratings, and our meaning of the term ‘best players’, is to view them as attempting to measure:

"Which players (if fit) would be selected for a World XI to play a match tomorrow."

In designing our Ratings, we have produced a system that ensures that players with a sustained run of good form (such as Michael Vaughan) can rise to very high rankings despite modest records earlier in their career. At the same time, established great players like Inzamam-ul-Haq or Chris Cairns who might play very few matches in a year due to injury, or simply due to circumstances beyond their control such as tours being cancelled, do not plunge to unreasonably low Ratings.


Traditionally, in assessing ‘best’ cricketers, commentators have tended to look purely at performances in Tests. With the growing influence of ODI cricket, we don’t believe this view is appropriate.

We do, however, regard Test and ODI cricket as different forms of the game. Some cricketers excel in only one form of the game (eg Bevan in ODIs, Gavaskar in Tests), and we believe that player ratings should be able to bring out this important element. This is why we believe that separate ratings should be produced for Test and ODI cricket.

However, the ‘best’ cricketers are those who can perform in both forms of the game. It should therefore be possible to combine a player’s Test and ODI Rating, and this can only be done if the points systems are compatible. (If, for example, Test Ratings are scored on a scale of 0 to 100 and ODIs on a scale of 0 to 10,000, it is meaningless to add the two together).

Because of the way that we have designed the PwC Ratings, a meaningful combined Test/ODI Rating is obtained simply by adding the two points tables together, although we have not tended to publish our combined Rating in the past. Sachin Tendulkar, Ricky Ponting and Matthew Hayden would currently be vying to top such a combined Rating for batting, while either Muttiah Muralitharan or Glenn McGrath would top the bowling.
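As a sketch of this additivity, a combined list is just the element-wise sum of the two tables. The point values below are invented for illustration, not actual PwC figures:

```python
# Illustrative only: Test and ODI Ratings share the same 0-1000 scale,
# so a combined rating is simply the sum of a player's two point totals.
# All point values here are hypothetical, not real PwC figures.
test_batting = {"Tendulkar": 830, "Ponting": 810, "Hayden": 800}
odi_batting = {"Tendulkar": 780, "Ponting": 790, "Hayden": 760}

combined = {
    name: test_batting.get(name, 0) + odi_batting.get(name, 0)
    for name in set(test_batting) | set(odi_batting)
}

# Rank players by combined points, highest first
for name, points in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(name, points)
```

Because both scales run 0 to 1000, the combined figure is meaningful on a 0 to 2000 scale; this would not hold if the two lists used incompatible ranges.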

C. WHAT SKILLS SHOULD BE RATED (batsmen, bowlers, fielders, all-rounders)?

Should ratings be of overall cricketing ability or of individual skills? Because they are such different disciplines, we have always kept separate lists for batsmen and bowlers, in keeping with the historic way in which averages have been presented. However, since there is always an interest in all-rounders, we produce an index of all-rounders as a back-up to our main lists. An all-rounder could be measured simply by adding his batting and bowling points together, but we believe it is nonsense to describe a player who scores 700 points for batting and 0 for bowling as an all-rounder, since he does not bowl. We therefore produce our all-rounder index by multiplying batting and bowling points together. In our points system, a player with 500 batting and 400 bowling points therefore ranks higher as an all-rounder than a 600+300 or a 900+0 player.
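The multiplicative index can be sketched as follows, using the illustrative point totals from the paragraph above:

```python
# Sketch of the all-rounder index described above: multiply batting and
# bowling points rather than adding them, so a player who does not bowl
# (0 bowling points) scores 0 as an all-rounder.
def all_rounder_index(batting_points: int, bowling_points: int) -> int:
    return batting_points * bowling_points

# The three example profiles from the text (labels are ours)
players = {"balanced": (500, 400), "bat-heavy": (600, 300), "pure-bat": (900, 0)}

ranked = sorted(players, key=lambda p: all_rounder_index(*players[p]), reverse=True)
print(ranked)
```

Under addition all three players would score 900; under multiplication the balanced 500+400 player (200,000) outranks the 600+300 player (180,000), and the pure batsman scores nothing, which is exactly the behaviour the text argues for.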

In rating a cricketer, there is also the key factor of fielding ability. In an ideal world, fielding ability should be included too. However, we do not believe that it is possible to produce a credible rating for fielders or wicketkeepers without an enormous amount of subjective judgment. For example, is it really fair to judge a wicketkeeper on catches and stumpings? What if the bowlers he is keeping to don’t create those chances? Who is going to assess the missed chances, and how does one measure a dropped catch against a catch that a slow keeper didn’t even go for? And when it comes to fielding, how can cover point be compared with first slip?

We believe that an attempt at fielding ratings would simply undermine the more credible ratings for batting and bowling.


We believe that Ratings should, as far as possible, treat all players equally, with no bias, deliberate or otherwise, towards their country or their personal reputation.

Any cricket lover is capable of rating a player, and in doing so will undoubtedly take into account the style of their strokeplay, their charisma and other human factors. A statistical rating cannot do this, and should not attempt to do so. There are no acceptable ways of measuring the quality of strokeplay, and there are no universally agreed criteria for saying that scoring your runs in front of square is superior to scoring them through third man. Nor is there any objective way of assessing the quality of a pitch. We therefore believe that players should only be rated using information available from a scorebook. We believe that to overcome the obvious anomalies of conventional averages, player rankings need to take account of the following:


For batting:

- number of runs scored
- whether he was dismissed or not
- who he scored his runs against
- the level of run-scoring in the match

(and in ODI cricket, the rate of scoring runs is crucial)

For bowling:

- wickets taken
- runs conceded
- the batsmen dismissed

(and in ODI cricket, the economy rate is crucial)

We believe that a rating system cannot be fair unless a batsman’s runs are adjusted to take some account of the level of run scoring in the match. 100 runs scored in an innings of 600 do not have the same impact as 100 runs scored in an innings of 200, and 100 runs made against the current Australian attack should be worth more than 100 runs against the current Bangladeshis. Likewise, a bowler who dismisses Tendulkar and Dravid deserves more credit than one who dismisses Kumble and Khan.
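PwC do not publish their exact adjustment formula. The principle can nevertheless be sketched roughly as follows; the scaling scheme, the baseline of 30 runs per wicket, and the attack_quality factor are all assumptions made purely for illustration:

```python
# Illustrative only: NOT PwC's actual formula. This sketch scales a
# batsman's runs by (a) the overall level of scoring in the match and
# (b) the strength of the opposing attack. Both factors are assumptions.
def adjusted_runs(runs: float,
                  match_avg_runs_per_wicket: float,
                  global_avg_runs_per_wicket: float = 30.0,  # assumed baseline
                  attack_quality: float = 1.0) -> float:     # >1.0 = strong attack
    # Runs in a low-scoring match count for more; runs against a strong
    # attack count for more.
    scoring_factor = global_avg_runs_per_wicket / match_avg_runs_per_wicket
    return runs * scoring_factor * attack_quality

# 100 in a match averaging 15 runs per wicket vs 100 in one averaging 60:
print(adjusted_runs(100, 15))  # 200.0
print(adjusted_runs(100, 60))  # 50.0
```

The point is only the direction of the adjustment: the same raw score of 100 is worth four times as much in the low-scoring match, mirroring the 600-run versus 200-run innings example above.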

We believe that if you are to extend beyond simple averages, these factors are fundamental to a cricket Rating, and they are the underlying factors used in the PwC Ratings.


In addition to the above factors, there are numerous other statistical factors that a rating could take into account. These include:

- the bearing of the performance on the match result

- the exact balls faced from each bowler by each batsman, and the number of balls bowled by each bowler to each batsman

- whether the match was played at home or away.

There is no right answer as to whether these factors should be taken into account or not.

In the PwC Ratings, we made a decision that the figures should, on balance, reward players who made significant contributions in victories. (A player who takes, say, one wicket or scores only 10 runs in a victory gets no bonus from PwC). This means that on balance the Ratings will tend to reflect winners. This is clearly a simplification of the real world, but not only is it relatively simple to define this measure, it also puts a premium on victory for the team as opposed to a player playing for himself.
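A win-bonus rule of this kind can be sketched as follows; the significance thresholds and the bonus rate are invented for illustration and are not PwC's actual values:

```python
# Rough sketch of a victory-bonus rule, NOT PwC's actual parameters.
# Per the text, a trivial contribution (say 10 runs or one wicket) in a
# victory earns no bonus; the exact thresholds below are assumptions.
def victory_bonus(runs: int, wickets: int, team_won: bool,
                  base_points: float, bonus_rate: float = 0.1) -> float:
    significant = runs > 10 or wickets > 1   # assumed significance test
    if team_won and significant:
        return base_points * bonus_rate
    return 0.0

print(victory_bonus(runs=85, wickets=0, team_won=True, base_points=60))  # bonus paid
print(victory_bonus(runs=10, wickets=1, team_won=True, base_points=60))  # no bonus
```

The design choice is that the bonus rewards winning contributions, not mere presence in a winning side, which is the distinction the paragraph above draws.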

However, in taking into account cricket’s subtleties you inevitably have to stop somewhere. If taking into account ‘home or away’, how do you allow for neutral venues like Sharjah? What about Lord’s, which traditionally seems to favour visitors more than the home team? Or Harare, where there are almost no fans, making the atmosphere unintimidating, as opposed to Kolkata or Melbourne where there are 100,000 partisan spectators?

If allowing for balls faced from particular bowlers, what about the stage of the innings? In his prime, Waqar Younis after 40 overs was twice as deadly as Waqar Younis in the first over. Spinners get better as the match progresses. Some fast bowlers become ineffective when the ball is soft. Do you take into account yorkers and long hops? Where do you stop? We do not think it is appropriate to incorporate a ball-by-ball factor into a weighted mathematical world rating. It risks giving a spurious accuracy when there are so many other subtleties not being taken into account. (We think it would be the equivalent of taking into account the difficulty of each hole in the golf rankings, or the quality of service returns in the tennis rankings.)

Finally, while rating cricketers is fascinating, it is easy to forget that above all sports, cricket involves a huge degree of luck. An inside edge onto the stumps, a poor lbw decision, a dropped catch – one isolated incident can completely change a player’s performance. The more scientific cricket ratings try to become, the more this element of luck becomes a spoiler, since it can override all considerations of the state of a pitch, the nature of the ball delivered and so on. Ironically, therefore, we believe it is possible for ratings to go too far, and in producing the PwC Ratings, we have gone as far as we believe is sensible in producing a scientific method in a game of luck and unfairness. And will we end the debate about who’s the best? Never.


In the different international sports there are various ways in which ratings are calculated. One common method is based on a cumulative system, where all performances in the last 12 months are added together (perhaps with previous years being added at a discounted rate).

There are various reasons why we reject this approach for cricket. The most important is that it favours players who get the opportunity to play more matches. In golf, tennis and other sports, the competitors can choose how many tournaments they play. In cricket, however, a player is limited both by the number of matches that his country has scheduled and also by the whim of the selectors who may drop him despite his good form and willingness to play. We don’t believe that cricket ratings should penalise a top class player who has the misfortune to play for a country that is too strong for him to secure a permanent place in the team, or a country that is in political turmoil and is unable to play many matches.

The method that we have adopted is the weighted average. This is much less susceptible to a player having an enforced absence for several months, and has other subtle advantages in the way it makes upward/downward movements of the ratings more predictable.
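A minimal sketch of a match-weighted average follows. PwC do not publish their weighting; the geometric decay here is an assumption, but it illustrates the key property that weights attach to matches rather than to calendar time, so an enforced absence does not erode the rating:

```python
# Sketch of a match-weighted average (assumed geometric decay, not PwC's
# published scheme). Scores are per-match performance figures, oldest
# first; the most recent match carries the largest weight.
def weighted_rating(match_scores, decay=0.9):
    n = len(match_scores)
    # Most recent match gets weight 1.0; each older match is discounted
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * s for w, s in zip(weights, match_scores)) / sum(weights)

in_form = [40, 55, 70, 90]   # an improving player, oldest score first
print(round(weighted_rating(in_form), 1))   # sits above the plain mean of 63.75
```

Because the discount applies per match played, a player who misses six months simply resumes from his existing weights, whereas a 12-month cumulative system would have silently drained his total during the absence.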


Any cricket follower is familiar with the idea that a batsman who averages over 50 is first rate, and a bowler who averages over 35 has only a modest record. These figures have meaning, and help followers to get a quick appreciation of a player’s quality.

We believe that Ratings should be the same. A certain number of points should mean something, and should be comparable with the points achieved by players in the past.

The PwC Ratings are designed to be in the range of 0 to 1000 points. 900 is ‘Bradmanesque’, 700 usually puts a player in the world top ten, 500 plus is the break point between established performers and newcomers or strugglers. These figures apply historically, too, allowing some degree of historic comparison. We think this gives the ratings considerably more quality and value from the follower’s point of view than a system where Dravid has, say, 1,804 points, with no explanation as to whether this is any good, or how it compares with Bradman.


At any time, the Ratings represent our best effort at producing a fair comparison of international cricketers. However, we are learning all the time, and the publication of the full ratings database on the website means the figures have come under far more scrutiny in the last year than ever before. As a result, we have had a considerable amount of intelligent feedback. We would be arrogant if we did not listen to it.