Monday, January 3, 2011

Process v. Results, Analytics and Teaching

On my agenda today is reviewing video of two matches played against our upcoming opponents in the FA Cup this weekend, Rochdale. Match analysis is one of the things I do rather well: I pick up patterns of play, see exploitable weaknesses, fixable problems, strengths to be reinforced. However, it has got me thinking a little more broadly about analysis and the application of statistical data in the game, and more broadly still about the process of teaching and its eventual relationship to results.

It is a general truism in any sport that the team with the best players usually wins. In cases where that doesn't prove true, the "lesser" team is better prepared, better organized, and yes, lucky. However, that luck, on reflection and research, is usually a planned event: the ball was played into an area the team felt they had a slight edge, and the opportunity was not missed.

But then, is the team that was a little smarter, a little better prepared, a little more cohesive, not then filled with "better players?"  The best example from this past year was Germany's thrashing of Argentina in the World Cup.  Argentina featured an in-form Higuaín, Messi, Tevez, and Di María, while the closest Germany really had to a glamour player was Özil.  Yet the Germans comprehensively out-worked, out-numbered, out-thought, and out-played the Argentines.

In analysis, and particularly statistical analysis, we look for meaningful data -- something that is variable, and thus can be changed, and will improve the likelihood of a positive result.  However, such data is hard to find.  Shots per game?  Well, a forward who scores once per match but is shooting twelve times might be wasting possession, and his team ends up on the wrong end of a lot of 3-1 defeats.  Passing efficiency?  A midfielder who is completing 95% of his passes probably isn't getting the ball into dangerous areas.  Xavi Hernandez is usually in the neighborhood of only 75-80%, but the passes he fails to complete are generally attempts into dangerous areas; perhaps only once a match does he surrender the ball in his own half.  Virtually all data sets that you can gather about football have only conditional meaning: the data is only probative in relation to other data sets.
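That conditional-meaning point can be sketched in a few lines of Python. The two "midfielders" and all the numbers below are invented purely for illustration -- the idea is just that a raw completion percentage, viewed alone, ranks the safe passer above the one who actually creates danger, while conditioning on where the passes were aimed reverses the picture.

```python
# Each pass is recorded as (completed, aimed_into_dangerous_area).
# Hypothetical players: one recycles possession safely, one attempts
# risky passes into dangerous areas and loses the ball more often.
safe_midfielder = [(True, False)] * 95 + [(False, False)] * 5
risky_midfielder = ([(True, False)] * 50 + [(True, True)] * 28
                    + [(False, True)] * 22)

def completion_rate(passes):
    """Raw passing efficiency: completed passes / total passes."""
    return sum(1 for completed, _ in passes if completed) / len(passes)

def dangerous_completions(passes):
    """Completed passes that were aimed into dangerous areas."""
    return sum(1 for completed, dangerous in passes
               if completed and dangerous)

print(completion_rate(safe_midfielder))     # 0.95 -- looks "better"
print(completion_rate(risky_midfielder))    # 0.78
print(dangerous_completions(safe_midfielder))   # 0
print(dangerous_completions(risky_midfielder))  # 28
```

The raw number flatters the safe passer; only when read against a second data set (pass destination) does it say anything useful about who is actually helping his team score.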

We teach techniques, insights, and applications of the principles of play to make "the best players."  Along the way, we tell ourselves that results are not necessarily indicative of developing those players.  We have all seen the stellar U12 team built around a big kid up top who gets balls played over the top and bullies the game.  Oftentimes, the turnover of that squad by U16 is 80%, and the big kid is frustrated and "burnt out" by 15 because he has no strategy other than kick-and-chase.

And yet -- is there an analytical yardstick by which we can see that the behaviours we teach will ultimately result in good play?  Of course, at the end of the day, that is measured by goals scored and matches won.  Or is the game so complex, with so many variables of position, communication, time and space, and the influence of players' qualities, that all data is either too general to be useful or so specific that the sample is too small to be meaningful?

We all know good play, a good player, and a good team when we see it.  Or at least we think we do.  Do we?  Or is football management all alchemy?  We toss the ingredients into the pot, and maybe Barcelona 2010 comes out, or maybe it's the French national team of 2002.

I think it's more art than science -- that it is alchemy.  But then again, I have to wonder whether the great managers, the Cloughs, Busbys, Fergusons, Shanklys, didn't see discrete, quantifiable, analyzable (if that's a word) moments in the game and qualities in players, and, even if they didn't consciously understand it, knew from their insight, knowledge, and just sheer brilliance exactly how it would all come together to create magic on the pitch.

(and trophies in the cabinet . . .)