Australian Rules Football and Elo Ratings

The Elo rating system is widely used in professional individual and team sports to compute relative rankings of teams or players. I had some free time on the weekend and wrote a fairly simple MATLAB script to compute the Elo rating for each team in the Australian Football League (AFL). I used the data files from AFL Tables as input to the MATLAB code. The table below lists each team's Elo rating as well as some basic statistics: the total number of wins, losses and draws, and total games played.

Rank  Team              Elo   Wins  Losses  Draws  Games
1     Hawthorn          1475   869     951     10   1830
2     Geelong           1469  1224    1042     21   2287
3     Collingwood       1468  1460     910     26   2396
4     Sydney            1428  1061    1199     23   2283
5     West Coast        1360   339     269      5    613
6     Brisbane Bears    1319    72     148      2    222
7     Adelaide          1316   269     241      1    511
8     St Kilda          1311   877    1338     25   2240
9     North Melbourne   1293   803    1005     17   1825
10    Fremantle         1292   168     236      0    404
11    Carlton           1253  1385     939     33   2357
12    Essendon          1194  1315     973     34   2322
13    Richmond          1170  1058    1035     22   2115
14    Brisbane Lions    1147   192     175      6    373
15    Western Bulldogs  1099   803     976     22   1801
16    Melbourne         1032  1034    1209     21   2264
17    Port Adelaide     1032   183     181      5    369
18    GW Sydney         1016     2      20      0     22
19    Gold Coast         992     6      38      0     44
20    Fitzroy            871   869    1034     25   1928
21    University         781    27      97      2    126


  • I used the results of all 14,166 games played in the VFL/AFL since 1897 to create the table.
  • The ratings of the top three teams are really close, but Hawthorn has edged out Geelong and Collingwood.
  • The Elo rating of each team was initialised to 1200; the only exception to this rule was the Brisbane Lions, which inherited the final Elo rating of the Brisbane Bears. This seemed reasonable given the details of the “merger” between the Brisbane Bears and Fitzroy Lions.
  • The University team withdrew from the competition during World War I and never returned.
  • All ratings were updated using a constant K-factor of 32.
  • My MATLAB script and the data file used to create this table are freely available for download here.
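
For reference, the rating update at the heart of the script can be sketched as follows. This is a Python sketch of the standard Elo update with a constant K-factor of 32, not the MATLAB code itself; the function names are mine:

```python
def expected_score(r_a, r_b):
    """Expected score of team A against team B under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Update both ratings after one game.

    score_a is team A's actual result: 1 for a win, 0.5 for a draw, 0 for a loss.
    The two updates are equal and opposite, so total rating is conserved.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

Replaying all the games in chronological order with this update, starting each team at 1200, is what produces a table like the one above.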

It would be interesting to use this data to predict the 2013 AFL season, perhaps using a methodology similar to the Elo++ work.

Analysing cancer data with a supercomputer

Yours truly has recently appeared in an article published in the local newspaper, The Age. The article discusses our NHMRC grant to use the IBM Blue Gene/P supercomputer for processing and analysis of SNP data. Unfortunately, I didn’t get a chance to proofread the story and ended up being referred to as a “computer whiz” and a talented “games programmer”. Oh well, any publicity is good publicity, right?

Hypothesis testing with Paul the Octopus

For the past four weeks, I’ve been enjoying the FIFA World Cup 2010, the most watched television event in the world. This World Cup is being held in South Africa, the first time an African nation has hosted the prestigious tournament. One of the surprise teams of the tournament has been Germany, beating both England and Argentina (4-1 and 4-0 respectively) before losing 0-1 to current European champions Spain in a tightly contested semi-final encounter.

Meanwhile in Oberhausen, Germany, a somewhat odd event took place before each of the Germany matches. Paul the Octopus, who resides at the local Sea Life Aquarium, was used as an oracle to predict the outcomes of all of Germany’s World Cup matches before the games took place. For a description of exactly how Paul makes his predictions, see this Wikipedia article. Amazingly, Paul has successfully predicted all six of the German games so far and has recently tipped Germany, to the delight of many Germans, to beat Uruguay in the upcoming game for 3rd/4th place. This should hopefully put a stop to those anti-octopus songs and calls to eat Paul. As statisticians, let us ask the question “Is Paul really an animal oracle or just one extremely lucky octopus?”.

We can model the number of Paul’s successful predictions at this World Cup as a binomial distribution B(p, n=6); that is, we have six independent trials (matches) with p being the probability of success (Paul predicting correctly) at each trial. To test whether Paul is psychic, we shall construct a 95% confidence interval for the probability of success, p. The standard confidence interval, often called the Wald interval, is known to have poor coverage properties in this scenario and exhibits erratic behaviour even when the sample size is large or p is near 0.5. Instead, we compute the modified Jeffreys 95% CI recommended in [1], and find that

CI_{M-J} = [0.54, 1.0]

This CI is quite wide, which is not unexpected given such a small sample size (n=6), and excludes the possibility that Paul is just plain old lucky (p=0.5)!
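
A Python sketch of this interval (the original computation isn't shown, so the function name is mine). For the all-successes case x = n relevant here, the boundary modification replaces the lower limit with the exact (α/2)^(1/n), which reproduces the 0.54 quoted above; interior cases use quantiles of the Beta posterior under the Jeffreys prior (scipy assumed for those):

```python
def modified_jeffreys_ci(x, n, alpha=0.05):
    """Equal-tailed modified Jeffreys interval for a binomial proportion.

    Boundary cases (x = 0 or x = n) use the exact limit (alpha/2)**(1/n),
    per the modification recommended in [1]."""
    if x == n:
        return (alpha / 2) ** (1.0 / n), 1.0
    if x == 0:
        return 0.0, 1.0 - (alpha / 2) ** (1.0 / n)
    # Interior cases: quantiles of the Beta(x + 1/2, n - x + 1/2) posterior
    # under the Jeffreys prior (lazy import keeps the boundary cases scipy-free).
    from scipy.stats import beta
    lower = beta.ppf(alpha / 2, x + 0.5, n - x + 0.5)
    upper = beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5)
    return lower, upper

lo, hi = modified_jeffreys_ci(6, 6)  # Paul: six correct predictions out of six
# lo ≈ 0.54, hi = 1.0 -- the interval excludes p = 0.5
```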

What can Minimum Message Length (MML) and Minimum Description Length (MDL) tell us about Paul’s psychic powers? We shall use the Wallace–Freeman (MML) codelength formula [2,3] and the Normalized Maximum Likelihood (NML) distribution (MDL) [4] for this task. Let A denote the hypothesis that Paul is lucky (p = 0.5), and B the alternative hypothesis that Paul is an animal oracle. We compute the codelength of the data and hypothesis under both scenarios; the difference in codelengths (codelength A − codelength B) is the log-odds in favour of the hypothesis with the smaller codelength. From standard information theory, the codelength under hypothesis A is −log2(1/2^6) = 6 bits, since each of the six predictions costs one bit. The codelength under hypothesis B is 2.82 bits using the WF formula and 1.92 bits using the NML distribution. Thus, both MML and MDL prefer hypothesis B.
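
The WF codelength involves a prior and the Fisher information, so I won’t reproduce it here, but the NML figure is easy to check directly: the NML codelength is the negative log of the maximized likelihood plus the log of its normalizer summed over all possible outcomes. A quick Python sketch (function and variable names are mine):

```python
from math import comb, log2

def nml_codelength(x, n):
    """NML codelength, in bits, of x successes in n binomial trials."""
    def max_lik(k):
        p = k / n  # maximum-likelihood estimate of p for k successes
        return comb(n, k) * p**k * (1 - p) ** (n - k)
    # Normalizer: maximized likelihood summed over every possible outcome.
    normalizer = sum(max_lik(k) for k in range(n + 1))
    return -log2(max_lik(x)) + log2(normalizer)

# Hypothesis A fixes p = 1/2, so each of the six predictions costs one bit.
codelength_A = 6.0                   # bits
codelength_B = nml_codelength(6, 6)  # ≈ 1.92 bits
```

This reproduces the 1.92 bits quoted for the NML distribution, well below the 6 bits required under hypothesis A.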

So there you have it, Paul must be the real deal! 😉

[1] Lawrence D. Brown, T. Tony Cai, and Anirban DasGupta. Interval Estimation for a Binomial Proportion, Statistical Science, Vol. 16, No. 2, pp. 101-133, 2001.
[2] C. S. Wallace and P. R. Freeman. Estimation and inference by compact coding, Journal of the Royal Statistical Society (Series B) Vol. 49, No. 3, pp. 230-265, 1987.
[3] C. S. Wallace. Statistical and Inductive Inference by Minimum Message Length, 1st ed., Springer, 2005.
[4] Jorma Rissanen. Information and Complexity in Statistical Modeling, 1st ed., Springer, 2009.

Eurovision 2010 Forecast (Part 2)

The Eurovision 2010 competition finished last Saturday with Lena Meyer-Landrut from Germany taking the title for her song “Satellite”. If you missed the show, you can see Lena performing the song at the official Eurovision YouTube channel here. Given the current state of the world economy, this is a pretty good outcome, as Germany is one of the few countries left in Europe with enough finances to host next year’s show. So how did my team, StatoVIC, go in the Kaggle Eurovision 2010 competition? The results have been tabulated and released here. It looks like StatoVIC took seventh place out of 22 submissions with an absolute error of 2626 points calculated from the predicted ratings. This score is in the top third of the submissions and about 1000 points better than “Lucky Guess”, the last-place submission (I assume this submission is just a random selection of ratings). Not a bad result for StatoVIC, really. Congratulations to Jure Zbontar for winning the competition with an impressive absolute error score about 400 rating points lower than our team’s.

It’s time for StatoVIC to look at the HIV progression challenge and see if we can do better than seventh place!

Eurovision 2010 Forecast (Part 1)

Last week I submitted predictions for the Kaggle Eurovision 2010 competition under the team name StatoVIC. The first part of the competition requires selecting the 25 countries that will make the Eurovision 2010 final. Once the 25 finalists are chosen, you are asked to predict the voting behaviour of all the participating countries based on 10 years of data collected from previous Eurovision competitions. In this year’s Eurovision, 20 countries are selected for the final based on the outcome of two semi-finals. In both semi-finals, there are 17 countries competing and the 10 countries with the most points go through to the final. The remaining five countries (Spain, Germany, United Kingdom, France and Norway) are guaranteed final competitors. With the second semi-final finishing last Thursday, it is time to see how the StatoVIC team has fared thus far.

In the first semi-final, I ended up predicting five (Bosnia, Russia, Greece, Serbia and Belgium) of the ten finalists correctly. In the second semi-final, I fared somewhat better, selecting eight of the ten countries that made the final. I missed out on picking Romania and Cyprus, instead choosing Croatia and Finland. Given the relatively naive strategy used to select the finalists, these numbers are certainly not too bad.

Out of interest, I had a brief look at how you would fare if you were to randomly select all the finalists in either of the two semi-finals. First, the “good” news: since only seven of the 17 countries in a semi miss the final, you are guaranteed to pick at least three finalists correctly. The probability of correctly guessing all ten finalists in a semi is unfortunately 1 in 19,448. The probability of correctly guessing exactly five and exactly eight finalists is about 0.27 and 0.05 respectively. The expected number of finalists guessed correctly with this strategy is between five and six. In light of this, the performance of StatoVIC is about average in the first semi, and moderately better than average in the second.
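
These figures follow from the hypergeometric distribution: a random selection of 10 of the 17 competing countries, counting its overlap with the 10 true finalists. A quick check (the function name is mine):

```python
from math import comb

def p_correct(k, pool=17, finalists=10, picks=10):
    """Probability that a random selection of `picks` countries from `pool`
    contains exactly k of the `finalists` that actually advance."""
    return comb(finalists, k) * comb(pool - finalists, picks - k) / comb(pool, picks)

# Fewer than 3 correct is impossible: there are only 7 non-finalists to pick.
# P(all 10 correct) = 1 / comb(17, 10) = 1 / 19448
# P(exactly 5) ≈ 0.27, P(exactly 8) ≈ 0.05, mean = 10 * 10 / 17 ≈ 5.9
```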

The Eurovision final is this Saturday night (shown Sunday night on SBS if you are in Australia). It will be interesting to see how StatoVIC fares in predicting the voting behaviour. In the meantime, here are the predictions of the fine folks at Google:

Machine Learning Competitions

For those of you interested in testing your machine learning skills, you may want to check out Kaggle. Kaggle is a new website hosting free-to-enter online machine learning competitions. Kaggle claims to offer two types of competitions: predicting the future and predicting the past. To take part in a particular competition, all you need to do is register for an account and follow the relevant competition instructions. Kaggle is currently hosting two competitions: (1) prediction of HIV progression, and (2) prediction of Eurovision song voting (!). The prizes offered for the HIV and Eurosong competitions are $500 USD and $1000 USD respectively; not in the same league as Netflix, but who does this stuff for the prize money anyway? There are certainly easier ways to earn $1000 USD.

I’ve just registered for an account and will definitely partake in at least one of the competitions. The name of our team is statovic. Keep an eye out for us on the Kaggle leaderboard!
