Friday, 1 June 2012

Learning from artificial cases?

As part of a more general training programme, I took part in a session by Miles Peacocke of Real Results (a colleague of whom conducted another session I’ve previously remarked on). It involved an exercise, or game, whose rules I will not describe, since not knowing about it in advance seems to be an important element of it (that seems true even if the purpose of the exercise remains opaque to me) and colleagues may well encounter it themselves. We were told that it involved co-operation, and it began with the distribution of a set of cards containing a variety of forms of content, though until I’d seen the cards it wasn’t clear to me whether it really was wholly co-operative. I hope I don’t give too much away by saying it involved acting on and ordering the cards under time pressure.

As it turned out, we – members of UCLan’s research-active staff – successfully completed it in good time (much to Miles’ apparent surprise) without the imposition of any top-down structure. Suggestions were made – I made some myself – but no one exercised any great authority, and we collectively allowed the necessary order to evolve out of initial disorder through a variety of distinct local strategies. Not everybody did the same thing with their cards, for example. But it seemed clear to me, early on, that this would not matter (though in other games it might).

Afterwards, Miles asked what we’d done well, what we’d improve, and what we thought would change if we did it again. And at this point I realised how difficult it is to learn anything general from such commercial training exercises. In this particular game, a bottom-up evolution was a safe strategy insofar as no one started off in a better epistemic state; best, therefore, to hear all suggestions. But not all the games in his little box might have worked that way. Nor was that something that we had ‘done’ in the sense of an intentional shared action: that would require exactly the kind of prior view we lacked. That it had happened was, retrospectively, a good thing, but it might not have been in other cases. So it made little sense to resolve to act in the same way, or differently, for an arbitrary choice of game (and there’s no point in asking about the same game now that we knew how it worked). I predicted that we would, as a matter of fact, act in pretty much the same way whether or not we should.

In other words, we probably hadn’t learnt anything easily generalisable except, perhaps, that academics are less rubbish at such things than one might expect. That was a pleasing result, perhaps, but not obviously what the session had taken as its aim (and a dangerous aim at that!). The universe of possibilities of any such exercise is just too large, and its connection to life outside such exercises too subject to interpretation, to draw robust lessons about when such ways of acting would be effective strategies and when not. Too much is contextual.

On the other hand, everything can form the basis for some sort of analogical thinking and, by strange coincidence, as I cycled back to the station I encountered just such a possible instance. The traffic lights controlling an offset four-way junction of two dual carriageways had broken, and cars were making their way gingerly across it. With no top-down control, responsibility had fallen to individual drivers, who were exercising a degree of courtesy such that even on a bike it did not seem too dangerous. So bottom-up order from chaos worked. But did it work better than traffic lights? And would it work for other junctions? Hard to generalise from that particular case.