Posted: July 13, 2018 By Kieran Darmody

Beating The Bookies: World Cup Score Predictions by Liberis

The FIFA World Cup final is almost upon us. And although England may be out, it looks like machine learning and statistical score predictions are in. Liberis Data Scientist Ed Gent tells us how!

Instead of the usual company sweepstake, for this year's FIFA World Cup Liberis ran a score prediction competition: 5 points for correctly guessing the exact result, 2 points for the correct goal difference, or just 1 point if all you managed was to pick the winning team. It was a fun competition for us all to get involved with – but, more importantly, it was a way to prove Superior Football Knowledge.

One week before the tournament kicked off, I decided to put my reputation on the line and use maths to take on the punters by creating a model to go up against everyone else’s unautomated (but probably much less time-consuming) guesses. The wider team don’t always get to see the performance of the credit risk models we build at Liberis, so the stakes were high and I knew I would be judged by the results of my score predictor.

The Highs, The Lows

We had some benchmarks thrown into the score predictions to judge our efforts against: notably a Random Number Generator (limited to a maximum score of 4) and the Bookmakers' predictions. Football scores are often surprising, which is partly what makes the World Cup such an exciting tournament, so I was interested to see how the Random Number Generator performed. Bookmakers make money by being better at predicting results than bettors, and professional bettors make money by making even better predictions than the bookmakers, so I didn't fancy my chances there.

I made a slow but steady start, working my way up the rankings during the group-stage matches and hovering around 8th place (out of 34) for the most part. By match 25 – Brazil vs Croatia – my model had overtaken the Random Number Generator (finally!), but I was nowhere near some of my colleagues and continued to lag behind the Bookies, who were consistently in the top 5.

But my model finally came into its own at the end of the group stage, predicting 4 of the last 8 scores exactly. Colombia's 1 – 0 win over Senegal snuck me into 1st place, 12 points ahead of the Bookies and 1 point ahead of my colleague Ray.

Whilst Ray was dismayed that I had leapfrogged him, I was dismayed that I hadn’t been betting on my results. Since the groups, the model has moved further ahead of the competition, although Ray could still catch me if things go horribly wrong this weekend.

The Hows

This isn’t a technical blog post, but I will explain a little about my approach and the magic behind the numbers.

The fundamentals are intuitive – what do you think the goal difference will be for Brazil vs Malta? 2-0? 4-0? And why do you think this? Some possible ideas:

  • Brazil are a historically renowned team, having won 5 World Cup titles, the most of any nation
  • They're also known for fielding offensive teams in which even the defenders score screamers – think Maicon in 2010 and Roberto Carlos in 1997
  • Malta aren’t exactly known for their football prowess (apologies to any Maltese readers if I have this wrong)

To build a model, you need to generate statistics (or to use the technical name: features) for each team that reflect your hypotheses. Taking the examples above, we could use:

  • Number of previous world cup titles
  • Average numbers of goals scored
  • The difference in FIFA ranking of each team

And plot these on one of the simplest models there is: the line of best fit, a.k.a. linear regression. Check it out below where I’ve plotted goal difference against the difference in FIFA ranking between two teams.

As expected, a large difference in FIFA ranking results in a higher goal difference (on average). However, you can see plenty of results where this is not the case (remember Germany losing 2-0 to Korea Republic?). This means that a model built on the FIFA ranking difference alone won’t do a great job.
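
To make this concrete, here's a rough sketch of that line of best fit in scikit-learn. The toy data and column names are placeholders for illustration, not the actual dataset:

```python
# Rough sketch of fitting goal difference against FIFA ranking difference.
# The toy data and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LinearRegression

# One row per match: rank_diff is the opponent's FIFA ranking minus the
# team's ranking (positive = the team is ranked above its opponent),
# goal_diff is the team's goals minus the opponent's goals.
matches = pd.DataFrame({
    "rank_diff": [60, 35, 10, 0, -12, -30, -55],
    "goal_diff": [3, 2, 1, 0, 0, -1, -2],
})

fit = LinearRegression().fit(matches[["rank_diff"]], matches["goal_diff"])
print(f"slope={fit.coef_[0]:.3f}, intercept={fit.intercept_:.3f}")

# Expected goal difference for a team ranked 40 places above its opponent
print(fit.predict(pd.DataFrame({"rank_diff": [40]}))[0])
```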

So we need a more complex model, which can tease out what drives goal difference beyond the difference in FIFA ranking. In order to do this, we need data. I found two datasets online which I decided to use: 40,000 international football results (dating back to the 1800s!) and FIFA rankings dating back to 1993.

For every match in the dataset I created over 40 features – such as win rate in the last 2 years, average goals scored per match in the last 5 years, and so on. Features like these can capture upward or downward trajectories, and the differences in features between the two teams can give clues about the type of game to expect – e.g. an exciting bout between two offensive teams, or a defensive slog between low-scoring sides.
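
To give a flavour of what those features look like in practice, here's a rough sketch of two rolling features computed per team at the time of each match. The column names and toy data are assumptions for illustration, not my actual code:

```python
# Sketch of two rolling features: win rate over the last 2 years and average
# goals scored over the last 5 years, excluding the current match.
# The column names (date, won, goals_for) and toy data are illustrative only.
import pandas as pd

def rolling_feature(team_matches: pd.DataFrame, col: str, years: int) -> pd.Series:
    """Mean of `col` over the previous `years` years, excluding the current match."""
    indexed = team_matches.set_index("date")[col]
    window = f"{365 * years}D"
    return indexed.rolling(window, closed="left").mean().reset_index(drop=True)

# One row per match for a single team, sorted by date
team_matches = pd.DataFrame({
    "date": pd.to_datetime(["2016-06-01", "2017-03-10", "2018-06-14"]),
    "won": [1, 0, 1],          # 1 if the team won the match
    "goals_for": [2, 1, 3],    # goals the team scored
})

team_matches["win_rate_2y"] = rolling_feature(team_matches, "won", years=2)
team_matches["avg_goals_5y"] = rolling_feature(team_matches, "goals_for", years=5)
print(team_matches)
```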

I was surprised to find that the difference in win rate in the last 2 years between each team accounted for 90% of the model’s performance, outperforming the difference in FIFA ranking.
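
For the curious, one common way to check this kind of feature contribution is permutation importance. Here's a sketch on simulated data, not my actual model or features:

```python
# Sketch of measuring feature contribution with permutation importance.
# The data is simulated and the feature names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
win_rate_diff = rng.normal(size=n)
rank_diff = rng.normal(size=n)
# Toy target in which the win-rate difference dominates
goal_diff = 2.0 * win_rate_diff + 0.3 * rank_diff + rng.normal(scale=0.5, size=n)

X = np.column_stack([win_rate_diff, rank_diff])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, goal_diff)

result = permutation_importance(model, X, goal_diff, n_repeats=10, random_state=0)
for name, importance in zip(["win_rate_diff_2y", "rank_diff"], result.importances_mean):
    print(f"{name}: {importance:.2f}")
```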

So, I had a model which did a decent job of predicting goal difference. But how to turn that into scores? I ended up taking a fairly boring, but statistically rigorous, approach: calculating the probability of each score combination, then assigning the most likely score consistent with the goal difference predicted by the model.
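
As a rough sketch of that last step, here's one way to pick the most common exact score for a given goal difference. The toy results below are placeholders, and the real approach estimated the score-combination probabilities properly rather than just counting:

```python
# Sketch: map a predicted goal difference to the most common exact score
# with that difference. The toy historical results are placeholders.
from collections import Counter

# (home_goals, away_goals) from a toy set of past results
historical_scores = [(1, 0), (2, 1), (1, 0), (0, 0), (2, 0), (1, 1), (3, 1), (1, 0)]

def most_common_score_for_diff(scores, goal_diff):
    """Most frequent exact score among results with the given goal difference."""
    matching = [s for s in scores if s[0] - s[1] == goal_diff]
    if not matching:
        return None
    return Counter(matching).most_common(1)[0][0]

predicted_goal_diff = 1  # e.g. the rounded output of the goal-difference model
print(most_common_score_for_diff(historical_scores, predicted_goal_diff))  # (1, 0)
```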

I estimated that my model would score an average of 1.34 points per match (remember – 5 points for the exact score, 2 points for the correct goal difference, and 1 point for just getting win/loss correct) which seemed very low to me at the time, but it turns out predicting football scores is harder than it seems . . .
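
For reference, the expected-points arithmetic under those scoring rules looks like this. The probabilities below are made-up placeholders, not my model's actual numbers:

```python
# Expected points per match under the competition's scoring rules.
# The probabilities are placeholders for illustration only.
p_exact = 0.12      # P(exact score correct)
p_goal_diff = 0.10  # P(goal difference correct but exact score wrong)
p_outcome = 0.45    # P(win/draw/loss correct but goal difference wrong)

expected_points = 5 * p_exact + 2 * p_goal_diff + 1 * p_outcome
print(expected_points)  # 1.25 with these placeholder numbers
```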

The model is currently on an average of 1.30 points per match with 2 matches remaining (recovering from 1.17 after 40 matches), and would have made a return on investment of 20% if betting on exact score outcomes – which is easy to say with hindsight 😉 There are no current plans for a Liberis syndicate.

The Final Predictions

The model is not patriotic, so I won't say what it predicts for England's 3rd place playoff against Belgium; however, justice may be served in the final, with a predicted 1-0 win for France over Croatia.
