How important is the starting line-up when predicting games?

As I mentioned when doing the betting backtest for my Expected Goals model, my Monte Carlo game simulation is done at the player level to account for missing players, which in theory should affect games a lot. The simulation involves a very simple prediction of the starting line-up for each team in each game – but how would the backtest result look if I somehow could look into the future and actually know which players would start the game?

To test this I’ve simulated every game from the 2015 Allsvenskan season again, using my second model with more heavily weighted home field advantage – but this time used the actual line-ups instead of having the model guess. For the backtest I’ve again used odds from Pinnacle and Matchbook, but won’t bore you with the results from both as they’re much the same. Here’s the model’s results betting at Matchbook:

lineup_01
lineup_02

As expected, knowing the correct line-up really boosts the model’s predictions, as it now makes a profit pretty much across the board. Just like with the previous backtests, the 1X2 market looks ridiculously profitable as the model is very good at finding value in underdogs.

Let’s compare the results with those from Model 2:

lineup_03

The numbers in this table represent the net difference in results between the two models. In general, Model 3 makes fewer bets at lower odds, but has a much higher win percentage – hence the bigger profit. Remember, the only difference between these models is that Model 3 uses the actual line-up for each game, while Model 2 has to guess.

So could these results be used to develop a betting strategy? When using the actual line-ups for the simulation, the opening odds are of course not available to bet on, since they are often posted a week or so before each game while the line-ups are released only an hour before kick-off. But as the game simulation only takes about a minute per game, it’s certainly possible to wait for the line-ups to be released before running the simulation and then bet on whatever the model deems to be value.


World Premiere(?): Expected Goals for Finland’s Veikkausliiga

A while back I stumbled upon shot location data for Finland’s top league, Veikkausliiga. I haven’t seen an Expected Goals model for this league before so despite having no interest in or knowledge of the league, I decided to develop a model for it based on my Expected Goals model of Swedish football. My idea is that a model could be a very useful tool and make a big difference when betting these smaller, lesser-known leagues.

Unfortunately only one season of data is available, and as with the Swedish data no distinction is made between shot types besides penalties. But the overall quality seems to be of a higher standard than its Swedish counterpart, and the data also contains more detailed player metrics like number of accurate passes, fouls, turnovers, etc., which might prove useful in the future.

Model results

FIN_01

First off I’ve tested if the Finnish data is significantly different from that in my Swedish model. It turns out it is, but as one season of data is probably not enough to develop a decent model, I’ve opted to add the new data to my existing model and use it for Veikkausliiga. No Finnish data will be used when dealing with Swedish games however.

Let’s look at some plots of how the model rates the teams and players in Veikkausliiga:

FIN_02
FIN_03
FIN_04
FIN_05

Data from the Swedish leagues is colored red and not included in the regressions.

FIN_06

What we can see is that the r-squared for xG/G is worryingly lower than the Swedish model’s 0.61. Also, the model does a better job explaining team defence than attack, just like the Swedish model. Why that is I don’t know.

The model rates HJK as the best team in terms of both xG and xG against, but they only finished third – albeit just two points below champions SJK, who seem to be overperforming massively, with a goal difference about 13 goals higher than expected.

At the bottom of the table, KTP seem to have overperformed while demoted Jaro underperformed. Mariehamn also seemingly underperformed in both attack and defence.

FIN_07
FIN_08

Looking at individual players, I’d say the model performs well, with an r-squared of 0.8, similar to that of the Swedish model. RoPS’ Kokko had the highest xG numbers to go with his title as top scorer, and interestingly, all players in the top 10 in goals outscored their xG numbers.

Betting backtest

While the model doesn’t seem to be as good as my Swedish model, I still think it’s reasonably good considering only one season of data from the league is used. But what about its performance on the betting market?

Just like I did with Allsvenskan, I’ve simulated each game using my Monte Carlo method for game simulation. Obviously only using data available prior to each game, my method relies heavily on long-term team and player performance, and my initial guess was that using it for the 2015 Veikkausliiga wouldn’t be profitable since there’s not enough data. Well, let’s see.

backtest_09

Running the backtest, my suspicion immediately proved right, as can be seen in the plot above. The model looks like a clear loser, and setting a minimum EV when betting doesn’t seem to change that. But looking at the plot, there’s actually a point late in the season where the model starts to perform better.

Since the model was at a huge disadvantage from the start with so little data (the Allsvenskan backtest used four seasons of data), I’ll allow myself to do some cherry picking. Here’s how the model performs betting Pinnacle’s odds after the international break in September:

backtest_10

backtest_11
backtest_12

As before, Model 2 is a variation of my Monte Carlo game simulation where home field advantage is weighted more heavily. Like with Allsvenskan, both models seem to focus on underdogs and higher odds. What is encouraging is that this time only a minimum EV threshold of 5% is needed to single out a reasonable number of bets. In my backtesting of Allsvenskan a threshold of 50% was needed, indicating that the model probably was skewed in some way.

Like in the Allsvenskan backtesting the model makes a killing on the 1X2 market due to its ability to sniff out underdogs. There’s also some profit to be made on Asian Handicaps while only Model 1 makes a profit betting Over/Unders.

I’ve also run the backtest against Matchbook’s odds, but while I won’t bore you with more plots and tables, what I can say is that the results again match up to my findings from the Allsvenskan backtesting. At Matchbook, betting the 1X2 market is still hugely profitable, the Asian Handicaps close in on odds around even money while Over/Unders perform better, albeit only on closing odds.

Conclusion

As expected, betting on Veikkausliiga from the start of the season would’ve proved a dismal affair. This is understandable since my method relies so heavily on long-term performance, and using only a couple of games to assess player and team quality isn’t a good idea.

But the model did seem to perform better late in the season, and while this probably isn’t enough for me to use it for betting on the upcoming 2016 Veikkausliiga season, I’ll keep my eyes on its performance against the market and maybe jump in when it seems to be more stable.

 


Putting the model to the test: Game simulation and Expected Goals vs. the betting market

With the regular Allsvenskan season and qualification play-off both being over months ago, instead of doing a season summary (fotbollssiffor and stryktipset i sista stund have already done that perfectly fine), I thought I’d see how my model has been performing on the betting market this season. Since my interest in football analytics comes mainly from its use in betting, this is the best test of a model for me. Though I usually don’t bet on Allsvenskan, if the model can beat the market, I’m interested.

Game simulation

To do this, I should first say a few things about how I simulate games. I want my simulations to resemble whatever they are supposed to model as much as possible, and because of this I’ve chosen not to use a Poisson regression model or anything remotely like that. Instead I’ve built my own Monte Carlo game simulation in order to emulate a real football game as closely as possible.

I won’t go into any details about exactly how the simulations are done, but the main steps include:

  • Weight the data for both sides to account for home field advantage.
  • Predict the starting line-up for each team using its most recent line-up, minutes played and known unavailable players.
  • Simulate a number of shots for each player, based on his shot numbers and the attacking and defensive characteristics of both teams.
  • Simulate an xG value for each shot, based on the player’s xG numbers and the attacking/defensive characteristics of both teams.
  • Given these xG values, simulate the outcome of each shot and record any goals.

Each game is simulated 10,000 times, obviously based only on data available prior to that particular game.
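In rough Python, the steps above might look something like the sketch below. This is a minimal illustration under my own assumptions, not the author's actual implementation: each player is reduced to a shot rate and an average xG per shot, and the home-advantage and team-strength adjustments are omitted.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's algorithm for a Poisson-distributed shot count.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_game(home_players, away_players, n_sims=10_000, seed=42):
    """Each player entry is a (shot_rate, xg_mean) tuple: expected shots
    per game and average xG per shot. Returns estimated probabilities
    for home win, draw and away win."""
    rng = random.Random(seed)
    outcomes = {"home": 0, "draw": 0, "away": 0}
    for _ in range(n_sims):
        goals = []
        for players in (home_players, away_players):
            score = 0
            for shot_rate, xg_mean in players:
                for _ in range(poisson_draw(rng, shot_rate)):
                    # Each shot is converted with probability equal to its xG.
                    if rng.random() < xg_mean:
                        score += 1
            goals.append(score)
        if goals[0] > goals[1]:
            outcomes["home"] += 1
        elif goals[0] < goals[1]:
            outcomes["away"] += 1
        else:
            outcomes["draw"] += 1
    return {k: v / n_sims for k, v in outcomes.items()}
```

The player tuples here are hypothetical; in the real model the shot and xG numbers would be weighted by home advantage and both teams' attacking and defensive characteristics, as described in the list above.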

The biggest advantage of this approach is that it’s easy to account for missing players – in fact, it’s done automatically. It also seems more straightforward and easily understood than other methods, at least to me. Another big plus is that it’s fairly easy to modify the Monte Carlo algorithm in order to try new things and incorporate different data. The drawbacks include the time it takes to simulate each game. At 10,000 simulations per game it takes about a minute, meaning that simulating a full 240-game Allsvenskan season would take at least 4 hours. Also, since my simulations rely heavily on up-to-date squad info, such a database has to be maintained, but this can be automated if you know where to look for the data.

For each game, the end result of all these simulations is a set of probabilities for each possible (and impossible!?) result, which can then be used to calculate win percentages and fair odds for any bet on the 1X2, Asian Handicap and Over/Under markets.

As an example of how the end result of the simulation looks, I’ve simulated a fictional Stockholm Twin Derby game, Djurgården vs. AIK. Here’s how my model would predict this game if it were to be played today (using last season’s squads – I haven’t accounted for new signings and players leaving yet):

game_sim_01

Given these numbers the fair odds for the 1X2 market would be about 2.31-3.62-3.44 while the Asian Handicap would be set at Djurgården -0.25 with fair odds at about 1.99-2.01 for the home and away sides respectively. The total would be set at 2.25 goals, with fair odds for Over/Under at about 2.04-1.96.
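Fair (no-margin) odds are simply the inverse of the simulated probabilities. As a quick check, the 1X2 probabilities below are reverse-engineered from the fair odds quoted above and are purely illustrative:

```python
def fair_odds(probability):
    # Decimal odds with no bookmaker margin: the inverse probability.
    return 1.0 / probability

# Illustrative 1X2 probabilities implied by the quoted 2.31-3.62-3.44.
p_home, p_draw, p_away = 0.433, 0.276, 0.291
for p in (p_home, p_draw, p_away):
    print(round(fair_odds(p), 2))
# prints 2.31, 3.62, 3.44
```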

Backtesting against the market

My odds history database contains odds from over 50 bookmakers, and since timing and exploiting odds movements is a big part of a successful betting strategy, it’s not a simple task to backtest a model properly over a full season. I’ve however tried to make it as easy as possible and set out some rules for the backtesting:

  • The backtest is based on the 1X2, Asian Handicap and Over/Under markets.
  • Only odds from the leading bookmaker Pinnacle and the betting exchange Matchbook are used. Maybe I’ll run the backtest against every available bookmaker for a later post, in order to find out which is best/worst at setting its lines.
  • Two variations of the Monte Carlo match simulation are tested, where Model 2 weights home field advantage more heavily.
  • Only opening and closing odds are used, in an attempt to simulate a simple, repeatable betting strategy.
  • For simplicity, the stake of each bet is 1 unit.
  • Since my model seems to disagree quite strongly with the bookies on almost every single game, high-value bets seem to exist suspiciously often. To get the number of bets down to a plausible level, I’ve applied a minimum Expected Value threshold of 0.5. As EV this high is usually only seen in big underdogs, this may be an indicator that my model is good at finding these kinds of bets – or that it is completely useless.
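The EV filter in the last rule can be sketched as follows. The helper names and example numbers are my own hypothetical illustrations, not the author's code:

```python
def expected_value(p_model, decimal_odds):
    # EV of a 1-unit bet: win (odds - 1) with probability p, lose 1 otherwise.
    return p_model * (decimal_odds - 1) - (1 - p_model)

def value_bets(candidates, min_ev=0.5):
    """candidates: (label, model probability, bookmaker odds) tuples."""
    return [c for c in candidates if expected_value(c[1], c[2]) >= min_ev]

bets = value_bets([("away win", 0.30, 6.00), ("home win", 0.55, 1.90)])
# only the away-win bet (EV = 0.30 * 5 - 0.70 = 0.80) clears the 0.5 threshold
```

As the rule notes, EV this large essentially only appears when the model's probability for a long shot is far above the bookmaker's implied probability.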

So let’s take a look at the results of the backtest – first off we have the bookmaker Pinnacle. Here are the results plotted over time:

Pinnacle

backtest_01

backtest_02

 

backtest_03

We can immediately see from the results table that the model indeed focuses on underdogs and higher odds. Set against Pinnacle, both variations of the model seem to be profitable on the 1X2 market, with Model 2 (with more weight on home field advantage) performing better with a massive 1.448 ROI.

Both models recorded a loss on the Asian Handicap market and only Model 1 made a profit in the Over/Unders – a disappointment as these are the markets I mostly bet on.

The table above contains bets on both opening and closing odds – let’s separate the two and see what we can learn:

backtest_04

Looking at these numbers we see that both models perform slightly better against the closing odds on the 1X2 market, while Model 2 actually made a tiny profit against the closing AH odds. We can also see that Model 1’s profit on Over/Unders came mostly from opening odds.

But what about the different outcomes to bet on? Let’s complicate things further:

backtest_05

So what can we learn from this ridiculous table? Well, the profit in the 1X2 market comes mainly from betting on away teams, which fits the notion that the model is good at picking out highly underestimated underdogs. Contrary to the 1X2 market, betting home sides on the Asian Handicap markets seems more profitable than away sides. Lastly, the model has been more profitable betting overs than unders.

As we’ve seen, my model seems to be good at finding underdogs which are underestimated, and at Pinnacle this bias mostly exists in the 1X2 market – hence the huge profit.

Matchbook

But what about the betting exchange Matchbook, where you actually bet directly against other gamblers?

backtest_06
backtest_07
backtest_08

The 1X2 market seems to be highly profitable at Matchbook too, and Model 1 actually made a nice profit on AH, especially away sides – in contrast to the results at Pinnacle. Also, the mean odds here are centered around even money. Over/Unders again seems to be a lost cause for my model.

Conclusion

As I’ve mentioned, the model seems best at finding underdogs and high odds which are just too highly priced, and looking at the time plots we can see that these bets occur mostly in the opening months of the season. This may be an indicator of how the market adjusts over time to surprise teams like this season’s Norrköping.

For a deeper analysis of the backtest I could have looked at how results differed for minus vs. plus handicaps on the AH market, and high vs. low O/U lines. Using different minimum EV thresholds would certainly change things, and different staking plans like Kelly could also have been included, but I left it all out so as not to overcomplicate things.

I feel I should emphasize that the conclusions drawn about betting strategy from this backtest apply only to my model, and not to Allsvenskan or football betting in general.

As we’ve seen, an Expected Goals model and Monte Carlo match simulation can indeed be used to profit on Allsvenskan. However, the result of any betting strategy depends heavily not only on the model, but also on when, where and what you bet on.


Preview: Allsvenskan Qualification Play-off

With the regular Swedish season being over and Norrköping crowned champions, all that’s left now is to decide who’ll get the last spot in next year’s Allsvenskan. In this qualification play-off, Sirius, who finished 3rd in Superettan, are pitted against Allsvenskan’s 14th-placed Falkenberg in a two-game battle.

Let’s have a look at some stats for the teams, compared to both the teams in Allsvenskan (blue) and Superettan (red):

play_off_01

From this graph, Sirius actually look really good, with an especially strong defence even when compared to the Allsvenskan teams, while Falkenberg’s defence looks really poor. However, this doesn’t say much about how the teams compare to each other, since Falkenberg have had to face far tougher opponents in Allsvenskan.

play_off_02 play_off_03

Looking at the xG maps what again stands out is the defensive performances of the teams. While Falkenberg have conceded a massive 415 shots, almost 14 per game, Sirius have only conceded 241 shots or about 8 per game. Not only that, Sirius’ xG per conceded attempt is 0.111 while Falkenberg’s is a staggering 0.154, meaning they concede shots in quite bad (for them) situations – not a good thing.
play_off_04

Looking at individual players we can see how Sirius’ Stefan Silva is the big overperformer here, with his 12 goals almost doubling his xG numbers. Also, Falkenberg seem to have more goalscoring options, with three players on more than 6 goals while Sirius only have Silva.

As always, I’m not willing to present any prediction for individual games, but here I had hoped to show the results of a simulation covering both play-off games, including possible extra time and a penalty shoot-out. I have run such a simulation, but I’m not happy with the results, as my model seems to favour Sirius too heavily. This is almost certainly due to the different leagues involved, making Sirius look way better than they would be against an Allsvenskan side. Since I only decided to write this post this morning, I haven’t had time to look into a possible league strength variable to use in the simulation.

But if I had to guess, I’d say that Sirius look like a really strong side and should possibly be considered favourites for promotion here, mostly due to Falkenberg’s nasty habit of conceding a lot of shots with high goal expectancies.


Predicting the final Allsvenskan table

With the Swedish season soon coming to an end, it’s a good time to try out how the Expected Goals model predicts the final table. With only three games left, a top trio consisting of this season’s big surprise Norrköping, just ahead of Göteborg and AIK, are competing for the title of Swedish Champions. At the opposite end of the table Åtvidaberg, Halmstad and Falkenberg look pretty stuck, with the latter two battling it out for the possible salvation of the 14th-place relegation play-off spot.

predict_table_01

Let’s take a look at the remaining schedule for the top three teams:

Norrköping have two tough away games left against Elfsborg and Malmö, who are both locked in a duel for 4th place, which could potentially mean a place in the Europa League qualification. Elfsborg are probably the tougher opponent here, with reigning champions Malmö busy in the Champions League group stage. Between these two away games Norrköping will play at home against Halmstad, who are fighting for survival at the bottom of the table.

Göteborg have two tough away games themselves, first off at Djurgården and later a very important game against fellow title contenders AIK. This game will probably decide which of the two will challenge Norrköping for the title in the last round. Göteborg finish the season at home to Kalmar, who could possibly be playing for their survival in this last game.

AIK have the best remaining schedule of the three top teams, with away games at Halmstad and Örebro on either side of the crucial home game against Göteborg. As mentioned, Halmstad are fighting for their existence in Allsvenskan, while Örebro’s recent great form has seen them through to a safe spot in the table.

At this late stage of the season there are a lot of psychological factors in play, with the motivation and spirit of teams and players often being connected to their position in the table. These aspects are very hard to quantify and have not been incorporated in my model, so my prediction of the table relies solely on my Expected Goals model used in Monte Carlo simulation. I won’t reveal exactly how I simulate games, but the subject will probably be touched upon in a later post, so I’ll spare you any boring technical details for now.

Each of the remaining 24 individual games has been simulated 10,000 times. For each of these fictional seasons I’ve counted up the points, goals scored and goal differences for every team to come up with a final table for that season. Lastly, I’ve combined all these seasons into a table with expected points and probabilities for each team’s possible league positions.
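The table simulation described above can be sketched roughly like this. It is a simplified, hypothetical version: the per-game 1X2 probabilities would come from the Monte Carlo game simulations, and unlike the real version it tracks only points, not goals scored or goal difference as tie-breakers:

```python
import random
from collections import defaultdict

def simulate_table(current_points, remaining, n_seasons=10_000, seed=1):
    """current_points: {team: points}; remaining: list of
    (home, away, p_home_win, p_draw) tuples. Returns, for each team,
    the probability of finishing in each league position."""
    rng = random.Random(seed)
    position_counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_seasons):
        pts = dict(current_points)
        for home, away, p_home, p_draw in remaining:
            r = rng.random()
            if r < p_home:
                pts[home] += 3            # home win
            elif r < p_home + p_draw:
                pts[home] += 1            # draw
                pts[away] += 1
            else:
                pts[away] += 3            # away win
        # rank teams by points and record each team's finishing position
        for pos, team in enumerate(sorted(pts, key=pts.get, reverse=True), 1):
            position_counts[team][pos] += 1
    return {team: {pos: n / n_seasons for pos, n in d.items()}
            for team, d in position_counts.items()}
```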

predict_table_03

The model clearly ranks Norrköping as the most likely winner with Göteborg as the main contender, while AIK’s chances of winning the title are only about 18%. The bottom three look rather fixed in their current positions, with Falkenberg having only a 2% chance of overtaking Kalmar for the last safe spot in the table. At mid-table things are still quite open, even though Djurgården’s season is pretty much over, with an 89% chance of placing 6th. Malmö seem to have an advantage over Elfsborg in the race for 4th place, but given their Champions League schedule their chances should probably be lower than the model predicts.

I’ll probably be posting updated predictions on my twitter feed after each of the top teams’ remaining games to see how the results change the predictions.


The Model part 1 – Exploring Expected Goals

In a series of posts I will be covering the work done on and with my model on Swedish football. In this first part of the series I’ll talk about the underlying concept upon which the model is built – Expected Goals.

We’ve all seen those games where the result ended up being extremely unfair given how the game played out. Maybe the dominant team had a spell of bad luck and conceded an own goal while missing their clear chances, or the opposing goalkeeper played the game of his career making some huge saves, or maybe the lesser side luckily managed to score from their only real chance. All these scenarios point to the same thing – there’s a lot of randomness associated with goals. We often see teams playing great and still losing, while a poorly playing side takes home all three points.

Because of this random nature of football, only looking at results and goals scored and conceded is not a good way to assess true team and player strength. Sure, good teams usually win, but they also sometimes run into spells of bad form and perform worse, while bad teams sometimes go on a good run, securing that last safe spot in the table just in time before the season ends.

To combat this problem, the football analytics community has turned its eyes to a more stable part of the game – shots – in the hope that these will exhibit less randomness and hold more explanatory power. While it is certainly true that examining how many shots a team produces and concedes can tell you more than goals alone, the same problem with randomness exists here too. Good teams usually take more shots than they concede but, as we all know, this is not always the case.

Expected Goals aims at getting down to the core of why good teams perform well and bad teams perform worse, and in the process avoiding some of the problems associated with just summing up goals and shots. It is based on the notion that good teams take more shots in good situations while bad teams do the opposite. The same is true in defence, as good teams avoid conceding shots in good situations more than bad teams do. The hope is that these characteristics will be less random and more useful in explaining and predicting football.

In its essence, Expected Goals gives you a value of how often a typical shot ended up in the net, and this is done by examining huge datasets in a number of different ways. Usually an Expected Goals model is based on where on the pitch the shot was taken and the reason for this is quite clear once you come to think about it – it all comes down to shot quality. Imagine two different scoring opportunities, the first being 25 meters out from the goal and the other being 5 meters from the goal. In traditional football reporting these two shots will be treated just the same, but we all know that the latter is preferable since it is closer to goal and probably an easier shot to make.

Given the different methods, ideas and datasets football analysts work with, there’s no right way to calculate an Expected Goals or xG value. For example, an ambitious analyst might account for not only where the shot was taken, but also what type of shot it was, what kind of pass preceded the shot, if the player dribbled before taking the shot etc. The possibilities are only limited by the data, and with the likes of Opta covering the top European leagues, these are vast.

Let’s take a look at a real example. In my database (more on that in a later post) I have 243 penalties recorded, of which 192 ended up in goal. To get the xG value for a penalty we just need to calculate the fraction of penalties which turned into goals, in this case 192/243, or about 0.79. In comparison, the xG value for a shot taken from the penalty spot during regular play is estimated by my model to be about 0.25, which makes sense since it’s a harder shot than the penalty.
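The penalty calculation above is a simple fraction, using the figures from the post:

```python
# xG for a penalty: the historical conversion rate in the database.
goals_scored, penalties_taken = 192, 243
xg_penalty = goals_scored / penalties_taken
print(round(xg_penalty, 2))  # prints 0.79
```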

As shown by several football analysts (for example on the blog 11tegen11), Expected Goals holds some real power in explaining football results. But it also has its weaknesses. There’s currently no way of accounting for the position of the defenders when the shot was taken, which surely would affect scoring expectation. Furthermore, Expected Goals only deals with actual shots taken, but as we all know, not all scoring chances produce a shot. It’s also true that xG values are averages, meaning that there’s actually a whole range of different expectations for different players. Surely Leo Messi will have a higher chance to score than Carlton Cole in nearly every situation.

To me, the real strength of Expected Goals lies in the fact that we can treat it as a probability and use it in simulations in order to examine more complex situations. Take the penalty, for example. With an xG value of 0.79, we can expect an average player to score most of the time, but he’ll also miss some shots. In fact, it’s not uncommon for him to miss several shots in a row. With the help of Monte Carlo simulation (again, more on that in later posts), we can examine the nature of the penalty shot more closely. Let’s say we get our player to take 10,000 penalty shots in a row. How many will he make?

Pen_sim_01

As we can see, our player started out by making his first shot, only to quickly drop below the expected 79% scoring rate, but as he took more and more shots he slowly moved towards his expected scoring rate. He actually scored 7,928 penalties, which is very close to the expected 7,900.
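An experiment like this is easy to re-create. The sketch below uses a random seed of my own choosing, so the exact total will differ from the post's 7,928, but the conversion rate converges on the expected 79% in the same way:

```python
import random

rng = random.Random(2015)
XG_PENALTY = 0.79

# 10,000 penalties, each scored with probability equal to its xG value.
scored = sum(rng.random() < XG_PENALTY for _ in range(10_000))
rate = scored / 10_000  # lands very close to 0.79
```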

Let’s try a more complex simulation just for fun. Imagine a penalty shoot-out. How likely is it to make all five shots? Four out of five? My database doesn’t contain any penalty shoot-outs, but my guess is that these are converted at a slightly lower rate than regular penalties, either due to the stress involved or maybe fatigue. But let’s use our standard xG value of 0.79 for simplicity, and simulate 10,000 shoot-outs with five penalties each.

Pen_sim_02

Given the conditions we’ve set up, it seems there’s about a 30% chance of scoring all five penalties, while making four is the most likely outcome. Going goalless from this shoot-out looks rather unlikely, but as I’ve said, the true chance of scoring after playing 120 minutes, with the hopes of thousands (or millions) of people on your shoulders, is probably lower – so a goalless shoot-out is probably more likely than our simulation shows.
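The shoot-out simulation can be sketched the same way. The binomial arithmetic in the comments confirms the roughly 30% figure for five out of five, and that four is the single most likely outcome:

```python
import random
from collections import Counter

rng = random.Random(2015)

# 10,000 simulated shoot-outs of five penalties each, xG 0.79 per kick.
counts = Counter(sum(rng.random() < 0.79 for _ in range(5))
                 for _ in range(10_000))

p_all_five = counts[5] / 10_000  # analytically 0.79**5 ≈ 0.31
p_four = counts[4] / 10_000      # analytically 5 * 0.79**4 * 0.21 ≈ 0.41
```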

That’s it about Expected Goals for now, and as I’ve said we will explore the possibilities of Monte Carlo simulation more thoroughly later. In my next post about my model I’ll talk about the data used for building it.
