We all take it for granted that test match experience is vital for teams to do well in competitions. The stats, however, paint a much more complicated picture.

Every time a team is announced, we understand that an important calculation went into the selections, centred on either preserving experience or gaining it. Experienced players might be rested so that they are available for other crucial matches; younger players might be included to build up their test experience. (Steven Prescott started the conversation about test match experience. See his article here.)

In this article, I put our assumptions about the importance of test match experience to the test. I look at four recent Rugby Championship tournaments, four recent Six Nations tournaments and the last four Rugby World Cups: a total of 12 tournaments in which 72 squads from 11 nations competed.

For those who want a close look at the data, here are the numbers on which I base my analysis.

Experience does not guarantee success

When we look at the most experienced squad in each tournament, it becomes clear that experience does not guarantee good results.

Most experienced sides by test cap and how they fared.

In three tournaments the most experienced side finished 1st, but in two tournaments they failed miserably and finished last. To be fair, in another three tournaments the most experienced side managed to finish 2nd.

This tells us that experience helps a team perform well, but it certainly won’t prevent a miserable result if the squad lacks the other necessities for doing well.

Top two vs bottom two

We will now expand our view and consider the performance of the TWO most capped teams and the TWO least capped teams in every competition. There is a good reason for that. In top-level tournaments, the competition is generally tight, and often there isn’t much to choose between the top two or three sides.

We know how narrow the margins between winning and losing can be, and that winning or losing can be the result of factors outside of the control of the teams, like a dodgy referee call, a (un)lucky bounce of the ball, a desperate ankle tap, or the infuriating tyranny of camera angles. It makes sense, therefore, to also look at the second most capped squads and the second best tournament results.

We will do the same analysis for the two lowest capped squads in each tournament. If we see a pattern for the high cap sides, and the pattern is reversed for the low cap sides, we know that experience works the way we suspect it does.

Looking at various test cap scenarios.

We can now clearly see that the most experienced squads do well, and the least experienced squads don’t. In 6 of the 12 tournaments the most experienced squad finished top two, and in 7 of the 12 tournaments the least experienced squad finished bottom two. It’s as close to a mirror image as you will get, and it confirms our notion that experience helps, and lack of experience hinders.
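The per-tournament check described above is simple to automate. Here is a minimal Python sketch of it; the tournament names, teams, cap averages and ranks below are invented placeholders for illustration, not the article’s dataset.

```python
from collections import defaultdict

# Each record: (tournament, team, average starting-XV caps, final rank, teams in field).
# All values here are invented placeholders, NOT the article's data.
results = [
    ("6N 2015", "A", 28, 1, 6), ("6N 2015", "B", 14, 6, 6), ("6N 2015", "C", 21, 3, 6),
    ("TRC 2016", "D", 31, 2, 4), ("TRC 2016", "E", 16, 4, 4), ("TRC 2016", "F", 22, 1, 4),
]

# Group squads by tournament.
by_tournament = defaultdict(list)
for tournament, team, caps, rank, field in results:
    by_tournament[tournament].append((team, caps, rank, field))

# For each tournament, check whether the most-capped squad finished top two
# and whether the least-capped squad finished bottom two.
for tournament, squads in by_tournament.items():
    most = max(squads, key=lambda s: s[1])   # squad with the highest average caps
    least = min(squads, key=lambda s: s[1])  # squad with the lowest average caps
    field = squads[0][3]
    print(tournament,
          "| most-capped finished top two:", most[2] <= 2,
          "| least-capped finished bottom two:", least[2] > field - 2)
```

Run over the real 12-tournament dataset, this loop would reproduce the 6-of-12 and 7-of-12 counts quoted above.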

A closer look

But looking closely, this pattern is not repeated for the second most experienced and second least experienced squads. Performance does not seem to be related to the number of caps unless the cap count is very high or very low.

We will now test this by looking at the actual number of caps, rather than just the most or least in the tournament.

Extreme levels of experience matter, a lot

The following graphic shows the tournament results of squads for every test cap band. Good (green) and bad (red) finishes seem to occur at almost every level of experience. However, if you focus on the two extremes, the very low experience band (top third of the graph) and the very high experience band (bottom third of the graph), you can see a pattern.

Note: Only starting line-up caps were counted. This is different to test appearances, which will always give a higher number.

Analysis of the extreme

Looking at the very high experience band (29 caps and higher on average), we can see that squads with this very high level of test match experience do extremely well. The sample had six squads at this level, and four of those went on to finish in the top two of their competition. We see a similar pattern at very low experience levels.

The squads with 17 or fewer test caps on average do extremely poorly in their competitions. 13 squads fell into this category, 6 of which went on to finish in the bottom two of their competitions. Surprisingly, 2 of those very inexperienced squads managed to finish top two in their competitions. But in the highly experienced band virtually all the squads finished in the top half of their competitions.

This suggests to me that very high levels of experience are always valuable, whereas very low levels of experience are definitely not good, but not impossible to work with.

Very high levels of experience are ALWAYS valuable. Very low levels of experience are bad, but not impossible to work with.

The middle range of test match experience (17 to 29 test caps on average) is neither an advantage nor a disadvantage. In this band, 18 squads went on to finish top two in their competitions, and 17 went on to finish bottom two.
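The banding logic behind these counts can be sketched in a few lines of Python. The squad records and band boundaries below follow the article’s 17/29 cut-offs, but the cap averages and ranks themselves are invented placeholders, not the real data.

```python
# Sketch of the cap-band analysis described above.
# Each record: (average starting-XV caps, final rank, teams in tournament).
# The numbers are illustrative placeholders, NOT the article's real data.
squads = [
    (32, 1, 6), (30, 2, 4), (15, 6, 6), (22, 3, 4),
    (17, 5, 6), (29, 1, 8), (24, 4, 6), (12, 2, 4),
]

LOW, HIGH = 17, 29  # band boundaries used in the article

def band(avg_caps):
    """Classify a squad by its average test caps."""
    if avg_caps <= LOW:
        return "very low"
    if avg_caps >= HIGH:
        return "very high"
    return "middle"

# Tally top-two / bottom-two / other finishes per band.
tally = {}
for avg_caps, rank, n_teams in squads:
    counts = tally.setdefault(band(avg_caps), {"top 2": 0, "bottom 2": 0, "other": 0})
    if rank <= 2:
        counts["top 2"] += 1
    elif rank > n_teams - 2:
        counts["bottom 2"] += 1
    else:
        counts["other"] += 1

for b, counts in tally.items():
    print(b, counts)
```

With the real dataset, the "very high" band would show a heavy lean towards top-two finishes and the "very low" band towards bottom-two finishes, while the "middle" band would split roughly evenly.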

Summary

I believe we can draw the following conclusions from the above analysis:

  • Experience cannot guarantee good results. Even with lots of experience, you can still finish last.
  • Experience does not mean much unless you have VERY little, or VERY much.
  • If you have VERY LITTLE experience you are at a serious disadvantage, BUT, you can still win a tournament.
  • If you have VERY MUCH experience you will always do well. You will never finish last.

This look at experience confirms what we intuitively know: success depends on doing a range of things well. One factor cannot guarantee success. But, if we understand the relationship between a factor and success, and we make sure we meet the minimum requirements of a factor, we will give ourselves the best chance of success.

Looking at the test experience factor, my advice for coaches would be to aim for the 29+ average test cap mark.

Author: Willem Van Rensburg

I was raised among Springboks, then matured among Kiwis, and now live among Wallabies. What’s next? I have never been good at playing this game, but what a game! Show me any other team sport that has equal room for the big, small, quick, slow, smart, not so smart. And when they work in unison it is like watching a symphony.

4 COMMENTS

  1. Is the data wrong? It appears to show that NZ won in 2003. England won the RWC 2003 tournament and had the most caps.

    Does this error potentially skew the whole argument? Doesn’t it show that in the RWC it is almost always the team with the most caps that makes the final?

  2. Thanks for the comment Glen. The data table shows tournament rank based on points difference in the knock-out phase. In 2003 NZ had the highest points difference, but didn’t make the final. I prefer points difference because it’s a more accurate measure of a team’s performance. My whole argument is that we need to look broader than just who won the tournament. I want to test the relevance of experience for doing well in a tournament, which might include a second place or worse. Looking back, who won and who lost doesn’t help us to measure ‘performance’ in a knock-out competition like the RWC.
    Re your idea that the finalists in the RWC almost always have the most caps, the data shows that not to be the case:
    RWC2003: England (most caps) v Australia (5th most caps)
    RWC2007: SA (5th most caps) v England (4th most caps)
    RWC2011: NZ (3rd most caps) v France (4th most caps)
    RWC2015: NZ (most caps) v Australia (3rd most caps)
    Which is why I argue that we should rather look at the absolute number of caps, not the relative number. If you do that, you notice that caps don’t matter much at all, UNLESS you have extremely many, or extremely few.

  3. I think the statistics are misleading because the strength of the sport in a country has a big impact. For example, without wishing to be disrespectful, Italy haven’t won, and are not likely to win, the 6N regardless of whether they field their most experienced or inexperienced team. I think these statistics may be interesting if you qualify the comparisons by restricting them within a group of similarly performing teams. For example, you have the big 3 – New Zealand, Australia and South Africa (all multiple WC winners). How does the experience factor count between this group? Similarly the NH4 – England, Ireland, France and Wales. All are 6N winners and usually make the final 8 of the WC, and they are generally on par with each other. How important is the experience factor when you look at their relative successes? (Note I exclude Scotland and Italy. Argentina could possibly be compared with the NH4 except that the statistical basis is reduced to the RWC only, as this is the only competition they meet in.)

  4. I agree with the premise of your argument Carol. That’s why I only considered the RWC knock-out phases, because they presumably included comparable teams. I totally agree that Italy does not belong in the club, but for the rest I feel they have all progressed to the knock-outs in recent RWCs, so deserve to be included in the comparison.
    Remember, I am not trying to assess the correlation between experience and tournament winners. I am looking at the relevance of experience for ‘doing well’ in a tournament, which means, for me, that the team has scored well and defended well (points difference). NZ in RWC2003 is a classic example. They came 3rd, but they performed extremely well looking at their points difference.
