Which nationalities do well in piano competitions? (2)

In my first post on piano competition statistics, I made a few interesting observations relating countries to number of prizes won (per capita) in piano competitions:

  1. Of all Western European countries, Italy wins most prizes in piano competitions.
  2. Belgium scores substantially better than the Netherlands (my home country).
  3. Many top-scoring countries are Eastern European countries (top 7), and more specifically former Soviet states, excluding Russia itself.
  4. The top Eastern Asian country is South Korea.
  5. China ranks very low, so the large numbers of Chinese pianists may currently still be explained by China’s huge population.

These are observations aggregated across the years 2000 – 2015. But what about trends across 2000 – 2015? Are some countries up-and-coming, winning more and more prizes each year? Are other countries in decline? The above observations do not say anything about this. To gain more insight into trends, we need to look at the prizes won in each individual year, without aggregating across all years. As an example, have a look at the prizes won by American contestants across the years 2000 – 2015, visualized as a fraction of the total number of prizes awarded each year:

[Figure: united-states__prizes_per_year]
The United States has been winning more and more prizes between 2000 and 2015. Fraction of the total number of prizes awarded each year that was won by American pianists. The trend is summarized using the Pearson correlation coefficient r (here: r = 0.62), which ranges from -1 (clear trend of fewer prizes in recent years) through 0 (no trend) to 1 (clear trend of more prizes in recent years). The low p-value (p = 0.001) indicates that the trend is significant, i.e. unlikely to be explained by random chance, and can thus be considered a ‘real’ trend.

You can see that the general trend is that Americans are winning increasingly many prizes. This increasing trend can be summarized in a single number, the correlation coefficient, which we will denote by r. The correlation coefficient r can take on any value between -1 and 1, with

  • r = -1: meaning a clear trend of fewer prizes in recent years,
  • r = 0: meaning no discernible trend,
  • r = 1: meaning a clear trend of more prizes in recent years.

In the above plot, r = 0.62, indicating a fairly strong positive trend of winning more and more prizes. Moreover, this trend is unlikely to be explained by random chance, as indicated by the low p-value (p = 0.001). It can thus be considered a ‘real’ trend. (Technical note, feel free to skip: the figure uses the Mann-Kendall test for monotonic trend detection.)
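For those who want to compute such a trend summary themselves, it could look roughly as follows in R. This is a sketch, not the original code: the yearly fractions below are random placeholders rather than the actual data, and the Kendall package is one of several that implement the Mann-Kendall test.

# Load the Kendall package, which provides the Mann-Kendall trend test.
library(Kendall)
# Hypothetical input: fraction of all prizes won by one country per year.
years <- 2000:2015
frac <- runif(length(years))  # placeholder values, not the real data
# Pearson correlation coefficient r of prize fraction against year.
r <- cor(years, frac)
# Mann-Kendall test for a monotonic trend; 'sl' is the two-sided p-value.
mk <- MannKendall(frac)
c(r = r, p = unname(mk$sl))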

Now you are probably curious to see this number r for all individual countries, and not just for the United States, so that you can see which countries are up-and-coming, and which are on the decline. This is precisely what the barplot below shows: for each country, the trend in prize-winning between 2000 and 2015 is summarized in a single correlation coefficient r, and a bar is colored red if the trend is significant (p < 0.05), i.e. can be considered a ‘real’ trend.

[Figure: prizes_per_year_and_country]
Which countries have been winning increasingly many prizes between 2000 and 2015? For each country, the trend in the number of prizes won across the years 2000 – 2015, summarized using the Pearson correlation coefficient. For bars > 0, there were generally more prizes in recent years; for bars < 0, generally fewer. For the red bars, the trend is significant (p < 0.05), i.e. unlikely to be explained by random chance.
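In case you are curious how the per-country trends behind such a barplot could be computed: a sketch, reusing the Kendall package from the sketch above and assuming a hypothetical matrix frac with one row per country and one column per year (again, not the original code):

# 'frac' is a hypothetical matrix of yearly prize fractions:
# one row per country, one column per year (2000-2015).
trend <- t(apply(frac, 1, function(x) {
	mk <- MannKendall(x)
	c(r = cor(2000:2015, x), p = unname(mk$sl))
}))
# Color a country's bar red if its trend is significant (p < 0.05).
barplot(trend[, "r"], col = ifelse(trend[, "p"] < 0.05, "red", "gray"),
	las = 2)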

How does the above plot compare to the five observations stated at the beginning of this post?

  1. Top Western European prize-winner Italy is on the decline. The original observation was that, when aggregating across the years 2000 – 2015, Italy won the most prizes per capita of all Western European countries. The above barplot additionally shows that, when looking at the trend across the years 2000 – 2015, Italians have however been winning fewer prizes in recent years (r < 0; p < 0.05).
  2. Are the Netherlands overtaking Belgium? The original observation was that, when aggregating across the years 2000 – 2015, Belgium won more prizes per capita than the Netherlands. The above barplot additionally suggests that, when looking at the trend across the years 2000 – 2015, the Netherlands may however have been improving a bit throughout the years, although the improvement is not significant (r > 0; p > 0.05).
  3. Top prize-winner Estonia is on the decline. The original observation was that, when aggregating across the years 2000 – 2015, Estonia won the most prizes (per capita) of any country. The above barplot additionally shows that, when looking at the trend across the years 2000 – 2015, Estonia is however on the decline and has been winning fewer prizes in recent years (r < 0; p < 0.05).
  4. South Korea is the strongest up-and-coming country. The original observation was that, when aggregating across the years 2000 – 2015, South Korea won the most prizes per capita of any Eastern Asian country. The above barplot additionally shows that, when looking at the trend across the years 2000 – 2015, South Korea is the strongest up-and-coming country, winning more and more prizes each year (r > 0; p < 0.05).
  5. Beware of China with its huge population. The original observation was that, when aggregating across the years 2000 – 2015, China ranked very low due to its large population. The above barplot additionally makes clear that, when looking at the trend across the years 2000 – 2015, the Chinese are nonetheless winning more and more prizes (r > 0; p < 0.05).

Finally, a minor note of caution regarding the interpretation of the above results: in some cases a small part of the increase in the number of prizes may be explained by population growth, which was not taken into account. On a related note: population size itself plays no role in this analysis, as we are only looking at the increase (or decrease) in the number of prizes, regardless of population size.

Order of appearance and final ranking in the Queen Elisabeth Competition

The degree to which competitions can be objectively judged depends on the type of competition. For example, athletes competing in a 100m sprint can be objectively ranked by the time they need to cross the finish line. On the other hand, objective judgement is virtually impossible in music competitions, because so much depends on highly subjective personal opinions, which can differ widely across members of a jury. A famous example is the 1980 Chopin Competition in Warsaw, where Ivo Pogorelich was eliminated in the 3rd round. This prompted Martha Argerich, declaring Pogorelich a ‘genius’, to resign from the jury in protest. It is also important to note that, since personal opinions play such an important role in judging music competitions, there is an increased risk of biases resulting from non-music-related (i.e. confounding) factors. For example, it is often said that the order of appearance affects the final ranking of the candidates. In addition to jury members as a potential source of such biases, the contestants themselves may also be involved in generating them, for example when a specific position within the program of a competition has a certain psychological effect on a contestant.

The Queen Elisabeth Competition is one of the most prestigious music competitions in the world, and while watching the final round of this year’s edition in May, I was wondering to what extent the order of appearance has influenced the final rankings over the years. To assess this influence, I collected the data on order of appearance and ranking in all Queen Elisabeth Competition finals since 1952 from the website of the competition, in total 216 results for 18 piano competitions. In the final round of the Queen Elisabeth Competition, 12 pianists give their performances spread across 6 days, where each pianist is randomly (!) allocated a specific time slot on one of the 6 days, either before or after the intermission. Hence, one can imagine the final ranking being influenced (‘biased’) by the day of performance (1 – 6), as well as by the order of performance on a given day (1 or 2). This is visualized in the boxplot below, where prizes won by finalists over the years are divided into categories based on the day of performance, and the order of performance on a given day.

[Figure: prize_from_day_times_program]
How does the prize won by a Queen Elisabeth Competition finalist depend on the day and order of performance? Each red dot represents a prize won by a candidate. For each combination of day and order of performance there is a gray box that summarizes the prizes won by candidates in that slot: the thick black line within each gray box represents the typical prize for that particular slot. More specifically, it is the median prize, i.e. 50% of prizes are higher and 50% of prizes are lower than the black line.

Note that for all competitions since 1995, only prizes 1 – 6 have been awarded, and the remaining finalists are unranked. In the plot above, these unranked finalists have been assigned the ‘9.5th prize’ (the average of 7 and 12).
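As an aside, this imputation and the boxplot itself are easy to reproduce in R. A sketch, using a hypothetical data frame finals (this is not the original code):

# Hypothetical data frame 'finals' with one row per finalist and columns:
#   prize - final ranking (NA for unranked finalists since 1995)
#   day   - day of performance in the final (1-6)
#   order - order of performance on that day (1 or 2)
finals$prize[is.na(finals$prize)] <- 9.5  # impute the '9.5th prize'
boxplot(prize ~ order + day, data = finals,
	xlab = "order.day", ylab = "prize")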

Quite strikingly, the boxplot shows that the final ranking is indeed substantially influenced by the day and order of appearance. For example, if you’re the first finalist on the first day, this may mean you have lost before you have played even a single note: the 2nd finalist on the last day was typically ranked 5 to 6 places higher than the 1st finalist on the first day (as measured by the difference in median prize, i.e. the thick black line within each gray box)! At least equally striking is how the prize depends on the order of performance on a given day. For example, as a finalist you may be happy about being selected to play on the last day. However, it may not do you any good, unless you are scheduled to play after the intermission: the 2nd finalist on the last day was typically ranked 5 places higher than the 1st finalist on that same day! More generally, the 1st finalist on a given day typically did quite a bit worse than the 2nd finalist on the same day. Moreover, a 1st finalist on days 4, 5 or 6 typically did worse than a 2nd finalist on days 1, 2 or 3.

The above observations imply that if we wanted to place bets on the final ranking, without having any prior knowledge of music, we might actually be able to do quite a bit better than random guessing. For example, suppose we want to predict whether a finalist receives a prize or is unranked (or, for competitions before 1995: whether a finalist ranks between 1 and 6, or between 7 and 12). In this case, by random guessing we would expect to classify 6 out of 12 finalists correctly; guessing more than 6 correctly is better than expected. (Some technicalities follow, but feel free to skip to the last paragraph of this post.) One way of trying to do better than the expected 6 correct guesses is to use a Random Forest classifier. Doing this in the statistical programming language R, using the randomForest package, gives a specificity of 0.56 and a sensitivity of 0.71, as determined from the predicted classes of the input samples based on out-of-bag samples (p ≈ 0.0001 using Fisher’s exact test). Together, these give a generalization error rate of 37%.
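The post does not include the code, but the classifier could be set up roughly as follows (a sketch, reusing the hypothetical data frame finals from the boxplot sketch above):

library(randomForest)

# Outcome: was the finalist ranked 1-6 (prize) or not (unranked)?
finals$ranked <- factor(finals$prize <= 6)
fit <- randomForest(ranked ~ day + order, data = finals)
# Out-of-bag confusion matrix, from which sensitivity, specificity
# and the out-of-bag (generalization) error rate can be read off.
fit$confusion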

Hence, using such a classifier one would expect to classify 100% – 37% = 63% of finalists correctly as receiving a prize or not, purely based on the day and order of performance of the finalists. Note that 63% of 12 finalists amounts to 7 – 8 finalists, which is 1 – 2 more than expected by random guessing. This again demonstrates the substantial bias in the final ranking of finalists in the Queen Elisabeth Competition, induced by the day of performance and the order of performance on a given day.


Piano competitions and successful careers: the statistics

In my first post about piano competitions, I explained how I collected data from a website describing more than 11000 results from about 3000 piano competitions. By doing some elementary analysis in the statistical programming language R, among other things I confirmed my suspicion that Italians do very well in piano competitions, in terms of number of prizes won per million inhabitants of a country.

However, doing well in piano competitions should not be an end in itself, and there are quite a few examples of pianists winning top prizes at major piano competitions without going on to have a successful career. Therefore, one might wonder how predictive doing well in piano competitions is of achieving this ultimate goal: a successful career. To try and answer this question, we first need to think about the two things we want to relate:

  1. What defines “doing well in piano competitions”? We only have data for pianists who actually reached the finals of any given competition, not for those not reaching the finals. Therefore, “doing well” is determined by the ranking of the pianists within each of the finals, i.e. 1st prize, 2nd prize, 3rd prize, etc., and possibly finalists without a prize.
  2. What defines a “successful career”? Obviously, one could think of measures such as “number of concerts per year” or “yearly income”. Needless to say, these data are pretty hard to come by. Therefore, I decided to go with the relatively pragmatic “number of Google hits on the rainy Saturday afternoon of November 28, 2015”, as measured using the Custom Search API from the Google Developers Console for doing the following search: <“first_name last_name” piano>. In other words, the assumption is: the more Google hits, the more success.
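For the curious: retrieving such a hit count could look roughly like this in R. This is a sketch using the httr and jsonlite packages; the function and its name are mine, and api_key and cx are placeholders you would have to supply yourself:

library(httr)
library(jsonlite)

# Sketch: number of Google hits for <"first_name last_name" piano>,
# via the Custom Search API. 'api_key' and 'cx' (the custom search
# engine ID) are placeholders.
count_hits <- function(first_name, last_name, api_key, cx) {
	q <- sprintf('"%s %s" piano', first_name, last_name)
	resp <- GET("https://www.googleapis.com/customsearch/v1",
		query = list(key = api_key, cx = cx, q = q))
	hits <- fromJSON(content(resp, as = "text"))
	as.numeric(hits$searchInformation$totalResults)
}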

So, we will try and establish whether knowing the prize won in a competition allows us to predict the number of Google hits. We will call this the “prize effect”, i.e. the effect of prize on number of Google hits. For example, we can take the Queen Elisabeth Competition, and plot the prize won by each finalist against his/her number of Google hits:

[Figure: Queen_Elisabeth_Competition]
For the Queen Elisabeth Competition (years 2003, 2007, 2010, 2013), the prize won by each finalist against his/her number of Google hits. Note that in a Queen Elisabeth Competition final, there are 12 finalists, only 6 of whom are actually awarded a prize (prizes 1 to 6). The remaining 6 do not receive a prize, but here I artificially award them prize “7”.

We can see that 1st prize winners generally have more Google hits than finalists without a prize, so there indeed seems to be a weak trend. (Statistically, this observation is confirmed by a Kendall correlation coefficient of -0.3, negative because a lower prize number means a better ranking, and a corresponding p-value of 0.0055.)
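In R, this test is a one-liner (a sketch; prize and nhits stand for hypothetical vectors holding each finalist’s prize and hit count):

# 'prize' (1-7, with 7 = no prize) and 'nhits' per finalist.
cor.test(prize, nhits, method = "kendall")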

OK, simple enough, right? Well…, maybe not. These are only results for the Queen Elisabeth Competition. What if we want to assess the same trend, but based on all competitions simultaneously? Now we have a problem, because some competitions are more prestigious than others. In other words, you might expect someone coming in 3rd at the Van Cliburn Competition to have better chances of a successful career (and thus more Google hits) than someone coming in 3rd at the Premio de Piano Frechilla-Zuloaga, simply because the Van Cliburn Competition is a more prestigious competition. We will call this the “competition effect”. Also, it is not unlikely that the number of Google hits in November 2015 is influenced by the year in which a competition was won. We will call this the “year effect”. So what we want to do is determine the “prize effect” on the number of Google hits, while correcting for the “competition effect” and the “year effect”. (Now we’ll get into some technical details, but feel free to skip these and head directly to the conclusion to learn whether prize indeed predicts a successful career.) Fortunately, there is a class of statistical models called mixed models that can help us out here. More specifically, we’ll use the lme4 R-package to construct a mixed model predicting the number of Google hits from a fixed effect “prize”, a fixed effect “year”, and a random effect “competition”. Suppose that data.frame K0 contains our data, namely the columns:

  1. nhits: number of Google hits
  2. prize: the prize won
  3. year: the year in which a prize was won
  4. competition: the name of the competition

Then one way of determining whether the prize effect is significant when taking into account the competition and year effects is the following:


# Load the lme4 package, which provides the lmer() function.
library(lme4)
# Log-transform the number of Google hits, to make it a bit better
# behaved in terms of distribution. To make the log-transform work,
# we first need to add a 'pseudocount' of 1, so as to avoid taking
# the logarithm of 0.
K0$nhits <- log10(K0$nhits + 1)
# Make "year" into a factor, such that it will be treated as a
# categorical variable.
K0$year <- factor(K0$year)
# Train the null model, predicting nhits from year (fixed effect)
# and competition (random intercept) only, i.e. without prize.
fit.null <- lmer(
	nhits ~ year + (1|competition),
	data = K0,
	REML = FALSE
)
# Train the full model, predicting nhits from prize, year and
# competition.
fit.full <- lmer(
	nhits ~ prize + year + (1|competition),
	data = K0,
	REML = FALSE
)
# Compare the null model with the full model by performing a
# likelihood ratio test using the anova() function.
anova(fit.full, fit.null)

Note that year is treated as a categorical variable. This is because the effect of year on nhits is likely nonlinear. Indeed, I observed this for the Queen Elisabeth Competition, with relatively many Google hits for the 2003 and 2013 competitions, and fewer for the 2007 and 2010 competitions. This could be explained by the fact that 2013 laureates get more hits due to recently winning a prize, and that 2003 laureates get more hits because they have had 10 years to establish a career. Such nonlinearity is difficult to capture in a linear model. However, the number of distinct years is limited, and moreover we are not interested in assessing the magnitude of the year effect, but only in removing it. Therefore, we can treat year as a categorical variable. The above approach gives p < 2.2e-16 for the difference between the two models, thus indicating that, across competitions in general, prize indeed contributes to the number of Google hits. We can try to visualize the prize effect by plotting prize against the residuals of both the null model and the full model:

[Figure: mixed_model_residuals]
Prize against the residual log10 number of Google hits (“nhits”), for the null model as well as the full model.

This demonstrates that there is a significant correlation of prize with residual nhits in the null model, which is removed when including prize as a predictor variable in the full model. This indeed indicates a trend, across competitions and years, toward mildly more Google hits for winners of top prizes. Also evident from this plot is that the residuals may not be normally distributed, thus violating one of the assumptions of mixed models. This is even more clearly seen in a Q-Q plot, shown below for the residual nhits of the full model:

[Figure: QQ_plot]
Q-Q plot of the log10 residual number of Google hits of 1st prize winners, according to the full model.

If the residuals were normally distributed, they would follow the dashed line more closely. Thus, the mixed model as described above may not be entirely appropriate for our current purpose. Therefore, in order to determine the significance of the prize effect, we may want to replace the likelihood ratio test with a non-parametric test, such as a permutation-based test. Doing this, as it turns out, also gives a significant result, namely p < 0.001 using 1000 permutations. Thus, prize can predict the number of Google hits, at least to a certain extent. This is also indicated by the highly significant non-parametric Kendall correlation of prize with residual nhits in the null model. However, at -0.081 the magnitude of this correlation is fairly small. A note of caution: strictly speaking we have not established that winning a top prize actually causes a higher number of Google hits; we have only established an undirected association between the two. Nonetheless, considering that all competition results preceded the retrieval date of the Google hit counts, typically by 5 to 10 years, this is by far the most likely interpretation.
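The post does not spell out the exact permutation scheme, but one way to set up such a test is to repeatedly permute the prize column and recompute the likelihood ratio statistic (a sketch, reusing K0, fit.null and fit.full from above):

# Observed likelihood ratio statistic of the full vs. the null model.
lr.obs <- as.numeric(2 * (logLik(fit.full) - logLik(fit.null)))
# Null distribution: refit the full model on data with permuted prizes.
lr.perm <- replicate(1000, {
	K1 <- K0
	K1$prize <- sample(K1$prize)
	fit.perm <- lmer(nhits ~ prize + year + (1|competition),
		data = K1, REML = FALSE)
	as.numeric(2 * (logLik(fit.perm) - logLik(fit.null)))
})
# Permutation p-value, counting the observed statistic itself.
mean(c(lr.perm, lr.obs) >= lr.obs)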

The above observations lead to the following conclusion: if you are a finalist in a piano competition and win a top prize, you do seem to have better chances of having a successful career than a finalist not winning a top prize, at least in terms of number of Google hits. However, this “prize effect” is remarkably small when observed for piano competitions in general.

Which nationalities do well in piano competitions?

Yesterday, I was listening to Guido Agosti’s awesome piano transcription of Stravinsky’s The Firebird, in a great performance by the Italian pianist Francesco Piemontesi. I remembered Francesco Piemontesi from coming in third at the 2007 Queen Elisabeth Competition in Brussels, and I was thinking: how come it seems that Italian pianists always do so well in international competitions, compared to many other European nationalities? It struck me that this was actually an assumption, as I had never seen any concrete numbers on this, at least not across many different competitions world-wide.

Curious as I am, I paid a visit to a website collecting this type of data, to see if I could find any numbers on this. Unfortunately, the website itself provided only very limited functionality for mining the data, far from enough to answer the question I was interested in. However, being a computational scientist / data scientist (and classically trained pianist), I could not resist: using Firefox’s Network Monitor, I tracked down the precise HTTP GET request that was sent to query the database for any individual competition. Then, I constructed my own raw HTTP GET requests, and used the computer networking tool netcat to send these in batch to the server. This allowed me to retrieve the results in HTML for about 3000 international piano competitions. I parsed this HTML data mainly using the XML and stringr R-packages (tricky…), downloaded some more data regarding population size by country from Wikipedia, and then did some elementary analysis in R.
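To give a flavor of the parsing step, here is a minimal sketch in R; the file name and XPath expression are hypothetical placeholders, not the site’s actual structure:

library(XML)

# Sketch: parse one retrieved results page and extract the text of all
# table rows. File name and XPath are hypothetical placeholders.
doc <- htmlParse("competition_results_0001.html")
rows <- xpathSApply(doc, "//table//tr", xmlValue)
head(rows)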

Behold, the first results of analyzing this data: the number of prizes per million inhabitants of a country.

[Figure: barplot of prizes per million inhabitants, by country]
Number of prizes in international piano competitions, per million inhabitants, by country
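For reference, the normalization behind this chart is simple. A sketch, where results and population are hypothetical stand-ins for the parsed competition results and the Wikipedia population data:

# 'results' has one row per prize won, with a 'country' column;
# 'population' is a named vector of inhabitants per country.
prizes <- table(results$country)
per_million <- sort(prizes / (population[names(prizes)] / 1e6),
	decreasing = TRUE)
barplot(per_million, horiz = TRUE, las = 1)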

The results are pretty interesting. Indeed, Italy ranks highest of all Western European countries! Another suspicion confirmed is that Belgium scores substantially better than the Netherlands (my home country). Somewhat surprising is that the charts are heavily dominated by Eastern Europe (top 7), and more specifically by former Soviet states, excluding Russia itself (top 5!). The top Eastern Asian country is South Korea. China ranks very low, so the large numbers of Chinese pianists may currently still be explained by China’s huge population.

Although the results are quite interesting, some caution regarding the interpretation particularly of small differences is in order:

  1. Population sizes are from 2014-2015; competition results are from 2000-2015.
  2. Some countries do not exist anymore. For example, prizes won by Yugoslavian pianists are not counted towards any of the currently existing states that previously made up Yugoslavia.
  3. The number of prizes in a country may somewhat depend on the number of international competitions in that country.
  4. Some numbers are very low. For example, Mongolia ranks a bit higher than China (!). However, this is based on only three prizes won by Mongolian pianists.

I will probably have another, more sophisticated, look at this data in the near future. To be continued…..