I actually like these graphs and I do find most of the analytics useful (well, a lot of it, actually - some stuff is utter nonsense like those hero charts).
I just wish there were an effective way for counting stats to paint a better picture of the subjective events on the ice, rather than simply counting them all equally. I do think there's an attempt to get there, but it seems a long way off right now.
I feel like an old, tired, stubborn mule, but I do think that Hartley's system on the Flames at the time was 'sustainable' simply because it was a great system implemented for the talent level of the team at the time. 1.5 strong seasons' worth, anyway. I also think Gulutzan's system was a terrible fit, and that the Flames' lack of success under it was, in large measure, just as 'sustainable'.
Hartley's system relied on creating high-chance 'red zone' attempts for, while trying to contest as many shots against as possible. When you look at HOW that team was built, one could argue it had two good defencemen (Giordano just breaking out as a legitimate #1 D, and Brodie also emerging), with Dennis Wideman relied on to play 25+ minutes a game when Gio went down for a very long stretch...
I honestly do think that some systems can sustainably 'beat' counting stats, and some systems sustainably disappoint even when the counting stats say they "should be better".
I do think that if some of you analytics-minded folks got your heads together and reliably assigned values to things like 'contested' vs. 'uncontested' shots, 'goalie having to go side-to-side to make a huge save', maybe 'five fricken jam-the-net shots'... and folded these into the current analytics, it could provide a stronger correlation.
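As a rough sketch of what that kind of weighting might look like: instead of every attempt counting as 1, each chance gets a subjective quality weight before summing. All of the category names and weight values below are hypothetical, purely to illustrate the idea, not an actual model anyone uses.

```python
# Hypothetical shot-quality weights -- illustrative only, not a real model.
# Instead of counting every attempt as 1, weight each event by how
# dangerous it subjectively was, then sum into a quality-adjusted total.
CHANCE_WEIGHTS = {
    "contested_shot": 0.5,          # shooter pressured, lane partly blocked
    "uncontested_shot": 1.0,        # clean look, no defender in the lane
    "cross_crease_pass_shot": 2.0,  # goalie forced to go side-to-side
    "net_front_scramble": 1.5,      # repeated jams at the crease
}

def quality_adjusted_chances(events):
    """Sum the hypothetical weights over a list of tagged chance events.

    Unknown event types default to a weight of 1.0 (a plain count).
    """
    return sum(CHANCE_WEIGHTS.get(kind, 1.0) for kind in events)

# Example: five events that a raw count would treat as 5 equal chances.
game_events = [
    "contested_shot",
    "contested_shot",
    "uncontested_shot",
    "cross_crease_pass_shot",
    "net_front_scramble",
]

raw_count = len(game_events)                      # 5
weighted = quality_adjusted_chances(game_events)  # 0.5+0.5+1.0+2.0+1.5 = 5.5
```

With weights like these, a team piling up contested perimeter shots would score lower than the raw count suggests, while a team generating cross-crease looks would score higher, which is exactly the gap between the counting stats and the eye test being described here.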
Some nights the counting stats will just be off (and sometimes way off) simply due to a goalie playing out of his mind. Those will always be outliers.
What I hated last season was the constant "Flames are on the verge" talk when the eye test told me that they weren't being 'unlucky', but rather that their chances for (including the high-danger chances) were not of great quality. Even when those high-danger chances for were way in Calgary's favour last season, I argued that the chances against were of even better quality (uncontested, sometimes cross-crease tap-ins that Smith had to make a crazy save to stop, etc.), even if they were lower in volume.
That's always the toughest part of objective analytics though, isn't it? Assigning consistent and reliable values to things that are subjective. Heck, what constitutes a hit varies wildly from arena to arena, and even shot counts weren't recorded uniformly from arena to arena, right?
I do think that stats can and often do correlate really nicely with what you're seeing, but I also think there are cases where they prove consistently unreliable, because of the different systems teams employ with differing amounts of talent.
I can't say that I've disagreed yet with any of the game charts Bingo has been posting. They have, for the most part, been matching my eye test. Even when they weren't a 100% match, I'm sure at least some of the individual columns were describing what I was seeing well enough.
Know what this tells me? That the Flames are a team with solid talent playing a system well (for the most part, ignoring a couple of obvious games in which they didn't play well), and that this system is more conducive to creating higher (real?) danger chances for, which helps align what you see with what the analytics say. Just my guess anyway.