For whatever reason, nobody seems able to shake the nerds vs jocks thing when it comes to sports analytics.
Here, for example, is some blowhard (don’t even know his name, don’t care) going on about how the Pythagorean theorem can’t help someone understand why a defensive line is bad.
No words. pic.twitter.com/btsaBZLrpf
— Alex Seixeiro (@alexfan590) September 21, 2017
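For anyone unfamiliar with the stat being mocked: the "pythagorean" in question presumably refers to the Pythagorean expectation, a formula popularized by Bill James in baseball that estimates a team's win rate from points scored and allowed. A minimal sketch in Python (the exponent is a tunable assumption — roughly 2 for baseball, with values around 2.37 often quoted for American football):

```python
def pythagorean_expectation(points_for, points_against, exponent=2.37):
    """Estimate a team's expected win fraction from points scored and allowed.

    The exponent is an empirical fitting parameter, not a law of nature;
    2.37 is one commonly quoted value for gridiron football.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)
```

It estimates season-level win rates from scoring margins, which is exactly why it says nothing about why a particular defensive line is bad — a point our friend could have made, but didn't.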
I mean, this video’s pretty funny. It hits on all the relevant stereotypes of old school media people who clearly fear analytics’ encroachment on their way of talking about sports.
But it also represents a missed opportunity: why does it have to be sports analytics vs not sports analytics?
In another world, for example, this man might even have had a salient point. I have no idea what he’s talking about, but he seemed to glancingly mention a stat based on what a player does on their left side on a particular day. On the face of it, that does sound like something that may not be inherently useful for a football team to know, particularly in a sport with so few league games in a season (in case it isn’t obvious, I know practically nothing about gridiron).
But he’s not interested in making that point: he clearly thinks all use of data is prima facie bad. Maybe it didn’t have to be this way. As I’ve written before, I think there was a brief historical window when things could have been different, when we could have framed the divide as between the much broader categories of evidence-based methods vs traditional methods.
Here, there are obvious opportunities for synthesis. Maybe, for example, the empirical evidence supports a lot of what clubs already do! Maybe it can help identify a few glaring errors, a few easy wins that don’t require hiring a full-time stats analyst from MIT.
Instead, we’ve decided to focus not on the idea of making decisions based on evidence itself, but on one of the tools by which we collect and analyze that evidence: statistical analysis.
And I think this framing has greatly slowed progress, and not merely because of defensive blowhards like our friend up there, but also because some on the other side seem to believe that every statistical model in sports has intrinsic worth, or that there is no aspect of decision-making in sports that cannot be improved with more data.
Yet by focusing solely on analytics and not the idea that drives it—that it is better to make decisions based on evidence of what works and what doesn’t—we cut out from the discussion an entire managerial and operational mindset that goes far beyond whether or not a team decides to use a statistical model.
This is critical, though, because the hope of the analytics movement, to my mind, has always been about far more than math. It’s about an umbrella of concepts, some of which involve the use of predictive modelling and better metrics, and some of which involve addressing far more basic issues like community outreach, financial planning and a sensible hierarchy of leadership roles. And while, yes, in an ideal world the end result should lead to more wins, more success, more trophies, to me the real goal of evidence-based approaches in high-variance sports is to provide stability.
Within most North American sports, stability is not a critical issue. It’s cool that the Houston Rockets are experimenting with new and exciting ways to run a sports franchise, but even if Daryl Morey was a traditionalist pinhead, the Rockets would likely get lucky now and then. In any case, they’re not going to be relegated from the NBA.
But in association football, particularly in the lower leagues and non-league tiers, stability is absolutely vital. It can offer clubs a platform to truly grow and experiment. It ensures that teams won’t waste the occasional windfalls that come with unlikely cup runs and the rest. It removes the existential sting that often accompanies relegation.
The objection here is that lower-league clubs can barely afford to stock the canteen, let alone pay someone to do data analysis. But that isn’t even necessary: there is enough already in the public domain for an enterprising club chairperson to try out, and perhaps make a difficult job slightly easier. That might mean establishing some rules for recruitment based on age and quality of previous club, working to ensure a good fit for new managers, and hatching contingency plans for relegations or even surprise promotions.
The F.A. might even consider drafting a document, in consultation with a few analysts, forward-thinking coaches and directors, technical recruiters and consultants, setting out some basic best practices for running a smaller club, to help ensure some sort of baseline stability. Those best practices may or may not involve the use of data analysis, but they should be based on evidence of what works best in which situations.
Once clubs have established this basic foundation, one that provides stronger insurance against the kind of blunders that can set a club back for years, even decades, they can get back to the traditional approach to winning: effective tactics, strong management and a little bit of luck.
That can’t happen though if we continue to talk about analytics and not about evidence.