Modern Fitba's Expected Goal Guide - Part 2
Written by: Christian Wulff @ahellofabeating
In the first part of this guide to Expected Goals (xG to her pals), we looked at what it is, how we at Modern Fitba calculate it, and why we believe our model is the one best adapted to the SPFL Premiership.
But why do so many analysts champion the use of xG?
Quite simply, compared to other stats, xG is highly likely to give a more accurate picture of current performances, and it’s better at predicting future performances, especially over smaller samples. For an in-depth look at this, I highly recommend reading these seminal pieces by Sander of 11Tegen11 and Michael Caley.
(Note: when we refer to xG illustrating ‘performances’ of a team, we essentially define this as a team’s ability to create and restrict chances)
As with all stats, the bigger the sample, the more confidence we can have in its accuracy.
In a small sample of games what we label ‘random variance’ (pure luck, a great goalkeeping performance, a striker on a hot streak, ref having a shocker, small margins etc.) is likely to play a big part in determining the result of a game. However, the bigger the sample, the smaller the role such factors will play – the ‘underlying’ quality of a team and player will come through and such ‘random variance’ will gradually lose its importance.
xG is better at picking up these underlying performances in smaller samples compared to looking purely at shots or goals scored. We know that xG is likely to be more accurate over smaller samples partly because of how well it aligns with goals over a bigger sample.
The graph below shows the percentage difference between the goals scored and the expected goals accumulated in the SPFL Premiership since the start of the 2016 season.
Over the first 50 games, the difference between the number of goals scored and expected goals fluctuates, with a difference of over 20% even after 40 games. However, at around 60 games the difference stabilises at around 10%, before gradually getting smaller. At the end of the 2017/18 season, the difference between xG and the total number of goals scored is only 1.9%. With another season of data, we expect it to get even closer.
This illustrates the point above: over a shorter sample of games, the number of goals scored might not reflect the quality of chances that were actually created and conceded (measured as xG).
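The convergence described above can be illustrated with a small simulation. The chance qualities and shot counts below are made-up illustrative numbers, not output from the Modern Fitba model: each chance is given a random xG value and is 'scored' at exactly that rate, and we then compare total goals to total xG over samples of increasing size.

```python
import random

random.seed(42)

def simulate_games(n_games, shots_per_game=12):
    """Simulate n_games of football where every chance is converted at
    exactly its xG rate; return (total goals, total xG)."""
    total_goals, total_xg = 0, 0.0
    for _ in range(n_games):
        for _ in range(shots_per_game):
            xg = random.uniform(0.02, 0.4)  # hypothetical quality of this chance
            total_xg += xg
            if random.random() < xg:        # chance converted at its xG rate
                total_goals += 1
    return total_goals, total_xg

for n in (10, 50, 200, 1000):
    goals, xg = simulate_games(n)
    diff = abs(goals - xg) / xg * 100
    print(f"{n:5d} games: goals={goals}, xG={xg:.1f}, difference={diff:.1f}%")
```

Even though finishing here is pure chance, the percentage gap between goals and xG is typically large over a handful of games and shrinks as the sample grows, mirroring the shape of the graph above.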
This is why analysts sometimes make what can seem like a contradictory argument; even if a team is getting bad results, their performances (i.e. the chances they create and concede) might actually not be that bad. Similarly, if a team is going through a patch of conceding very few goals, their defensive performance might not actually be as good as it seems (such as this example with Hearts from last season).
The way analysts often phrase this is whether a team’s results or a striker’s scoring rate is sustainable or not. If a striker is scoring a lot of goals, but his expected goals tally is considerably smaller, flashing red lights should go off all over the place: this is a hot streak and the number of goals he's scoring is not sustainable. A striker who seemingly can’t score from anything but has an expected goals value much higher than his actual goals? Relax; he’s very likely to start scoring soon.
We call this ‘regressing to the mean’: sooner or later the output (goals scored/conceded) will move towards what the underlying numbers say, i.e. the average rate of scoring/conceding from these types of chances (as measured by xG).
It is one of the fundamental rules of xG analysis: if a striker’s scoring rate seems too good to be true, it probably is. (James Yorke’s article on Joshua King’s ‘hot foot’ in his fantastic 16/17 season is a very good example.)
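One rough way to put a number on 'too good to be true' is to simulate a striker's season of chances at their xG values and count how often pure finishing luck produces the observed tally. The season below (60 shots worth 0.10 xG each, 12 goals scored) is entirely hypothetical:

```python
import random

random.seed(1)

# Hypothetical season: 60 shots worth 0.10 xG each (total xG = 6.0),
# but the striker has scored 12 -- double his expectation.
shot_xgs = [0.10] * 60
observed_goals = 12

# Monte Carlo: replay the season many times, converting each chance
# at exactly its xG rate, and see how often 12+ goals come up.
trials = 100_000
at_least = 0
for _ in range(trials):
    goals = sum(1 for xg in shot_xgs if random.random() < xg)
    if goals >= observed_goals:
        at_least += 1

print(f"Chance of {observed_goals}+ goals from {sum(shot_xgs):.1f} xG: {at_least / trials:.3f}")
```

The probability comes out at only a couple of percent: an average finisher would almost never double his xG over a season by luck alone, which is exactly why such a gap screams 'hot streak'.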
As mentioned in the first part of this series, even the best strikers don’t have many more goals than expected goals – they usually score a lot of goals because they get to a lot of chances (this article by Bobby Gardiner on Raheem Sterling is an essential read on the subject).
There could of course be instances where you have very good or very bad finishers who will consistently over- or underperform in relation to their xG. However, these cases are far rarer than you might imagine, and no solid conclusions should be drawn about a striker’s finishing ability using xG unless you have multiple seasons’ worth of data. Matt Rhein made this point in his article on Alfredo Morelos for Modern Fitba earlier this year; while there are some warning signs around the Colombian’s finishing, there is simply not yet sufficient data to draw any definite conclusions.
Overall, what you want from a striker is that they produce a high xG value; if the goals aren’t coming just yet, it is highly likely that they will. Similarly, if your striker has a scoring rate high above his expected goals number, I regret to inform you that he won’t keep scoring that many goals for much longer.
On a team level, Partick Thistle is a great example of what the ‘underlying numbers’ such as xG can tell us about a team’s performance and what is likely to happen in the future.
On the 9th December 2016, Thistle were dead last in the Premiership with 14 points from 17 games, having just lost 4-1 at home to Celtic. The league was still tight, with only 4 points up to Kilmarnock in sixth, but it looked ominous. However, the picture painted by expected goals was a lot rosier; Partick were 5th when it came to the difference between xG created and xG conceded, not too far off Aberdeen in 4th. As long as they continued to put in the same type of performances, they would be fine. Thistle’s results soon caught up with their underlying stats, and they ended up finishing sixth after an impressive end to the season.
In the autumn of 2017, the story was similar, yet different. Thistle were again languishing towards the bottom of the table, but the warning signs were already flashing red in the analytics community; as early as the middle of October, Matt Rhein of this parish wrote an article for The Two Point One website (paywall) pointing out that Thistle’s underlying numbers were as bad as their results. This wasn’t a case of ‘stay patient, keep doing what you’re doing and the results will come’; rather, it was a confirmation of ‘yeah, you should probably start panicking now’. And so it proved: Thistle were relegated only a year after one of their best ever league seasons.
Another good example of xG’s predictive powers from last season, and also of how other factors can influence such predictions, is Kilmarnock. After 8 games and only 3 points, with the team propping up the league table, not many Killie fans were upset when Lee McCulloch left. However, as I pointed out at the time, McCulloch was entitled to feel quite hard done by, as Kilmarnock had the 6th best xG stats at the time of his departure. This meant that even if the performances stayed the same, the results were likely to improve.
Note the phrase ‘if performances stay the same’. This is often how analysts will frame a prediction about future performances (and results) based on underlying stats such as xG. It’s another way of saying that if a team keeps creating and conceding chances at its current rate, this is what is likely to happen to its upcoming results: either they look sustainable, i.e. they match the underlying performances, or the results will soon change (‘regress to the mean’).
There are of course many other factors that may impact the performance itself and the underlying stats we use to measure it. Kilmarnock are a very good example; before McCulloch left, the team was averaging an xG difference of -0.31 per game (i.e. on average they were conceding slightly more xG than they were creating). If this rate had been replicated throughout the whole season (i.e. if performances had stayed the same), they would have ended up with the 9th best xG difference in the SPFL. Not a great season, but likely to end up a fair bit better than it had started.
With Steve Clarke and quality players such as Youssouf Mulumbu coming in, not only did Kilmarnock’s results improve, so did their underlying numbers – they averaged a positive 0.31 xG difference per game under Clarke up until the league split. Replicated throughout the season, their xG difference would have been the 5th best in the league.
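The season projections above come from simple arithmetic: multiply the per-game xG difference by the length of a season. The per-game figures are the ones quoted above; the 38-game season length is the SPFL Premiership's.

```python
# Project a season-long xG difference from a per-game rate.
SEASON_GAMES = 38

mcculloch_xg_diff_per_game = -0.31   # Kilmarnock before the change
clarke_xg_diff_per_game = 0.31       # Kilmarnock under Clarke, pre-split

for label, per_game in [("McCulloch rate", mcculloch_xg_diff_per_game),
                        ("Clarke rate", clarke_xg_diff_per_game)]:
    projected = per_game * SEASON_GAMES
    print(f"{label}: {per_game:+.2f} per game -> {projected:+.1f} over {SEASON_GAMES} games")
```

A swing of 0.62 xG per game may sound small, but projected over a full season it is the difference between conceding roughly 12 more quality chances than you create and creating roughly 12 more than you concede.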
Such insight into the underlying stats could be crucial for a club when they are considering if and when to make a managerial change. If a club wanted to persist with a manager with bad results, they might find encouragement to do so if the underlying stats were decent, as results are likely to improve even without an improvement in performance.
However, it might also be the perfect time to make the change if the board have their doubts about the current manager: bringing in somebody else when results are bad but performances are better is likely to see an upswing in results for the new manager even if performances stay the same. The new appointment gets a good start, and the board is praised for their clever change.
At the same time, even if the board is convinced that a change is needed for the long-term benefit of the club, changing the manager if results are good but the underlying stats are worse might not be advisable. Even if the performances stayed the same under the new appointment, the results would likely suffer, putting pressure on the new manager and allowing pundits to proclaim what a strange decision it was by the board. Insight into the underlying stats would help the board time such changes to give the new manager a better likelihood of a good start.
Expected Goals is a metric that is still being refined and which is set to undergo more changes over the next few years. The next step of its evolution is likely to be around including more non-attempt data; even if an attack doesn’t result in a shot or header, it might well have been a goal-scoring opportunity worth noting. As explained in the first part of this series, our model includes a small degree of this, and it’s something set to become a bigger part of future models.
Over these two articles I’ve tried to set out what Expected Goals is and why we use it:
Expected Goals measures the quality of a chance.
It gives the quality of a chance a value of between 0 and 1 xG by looking at how often similar chances have been scored in the past (if similar chances have been scored 40% of the time, the xG value is 0.4).
We use xG not because it is flawless, but because it is better than anything else (at this moment).
xG is a better measurement of past performances than goals and shots (especially over smaller samples) and is also a better predictor of future performances.
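The core of that definition can be sketched in a few lines of code. This is a deliberately simplified toy, not the Modern Fitba model: real models describe a chance with many features (location, body part, assist type and so on) rather than a single 'chance type' label, but the principle of assigning the historical conversion rate is the same.

```python
from collections import defaultdict

def fit_xg(historical_shots):
    """historical_shots: list of (chance_type, scored) pairs, scored = 1 or 0.
    Returns each chance type's historical conversion rate, i.e. its xG."""
    taken = defaultdict(int)
    scored = defaultdict(int)
    for chance_type, was_goal in historical_shots:
        taken[chance_type] += 1
        scored[chance_type] += was_goal
    return {ct: scored[ct] / taken[ct] for ct in taken}

# Toy history: five similar headed chances, two of them scored...
history = [("six_yard_header", 1), ("six_yard_header", 1),
           ("six_yard_header", 0), ("six_yard_header", 0),
           ("six_yard_header", 0)]

# ...so the next such chance is worth 0.4 xG.
print(fit_xg(history)["six_yard_header"])  # 0.4
```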
Expected Goals and advanced statistics are set to become a bigger part of how clubs analyse data, how the media report on the game and how fans discuss football. If that sounds terrible to you, take comfort in the fact that xG and other metrics are not actually anything new; they are simply a way of trying to measure and quantify ideas and knowledge about football that have been part of the game since its start.