I’ve spent a delightful couple of days in Cardiff as a guest of Harry Collins and Rob Evans’ SEESHOP 5. In case it is of broader interest, I’ve transcribed my rough notes below. Whilst in general its main concerns are not mine, and I am embarrassed in such circumstances by how little I have properly researched the empirical issues around tacit knowledge, it seemed very helpful to get a kind of snapshot of this empirical programme.
William Thomas kicked off with an overview of history’s interest in Wave 2 social science (STS) and a suggestion that history could borrow more recent work on expertise to examine, for example, the development of operational research in the bringing of science to bear on the war effort in WWII.
Theresa Schilhab addressed a worry stemming from the relation between interactional and contributory expertise. The former involves the kind of non-coal-face expertise of people who lack a particular concrete skill but can still discuss the skill as though they had it. This suggests that they have appropriate conceptual mastery even if there are related things they cannot do. But she was worried that this might be challenged through brain imaging results.
She began by commenting that enactivism has come to dominate thinking about the neurological underpinnings of conceptual mastery. She then discussed a couple of empirical examples of brain imaging. First, brain activity in the areas responsible for smelling cinnamon, for example, is found in those merely thinking of it. Second, when subjects were prompted to sort objects not according to manipulability but rather some other distinction, different areas of the brain were nevertheless involved in thinking about objects which were, as a matter of fact, manipulable and those which were not. She used this to motivate the problem: if activity or direct experience underpins conceptual mastery, and if one person has not had the relevant practical experience whilst another has, how could they have the same concept (eg of cinnamon)? Translated into the context of SEESHOP, what of someone with practical and interactional expertise versus someone with just the latter?
Her suggestion was that early life establishes a basic set of concepts and that subsequent ability to think, for example, abstract thoughts was based on this foundation. This seemed a modern retread of Lockean empiricism. It did prompt the question in my mind: what mediates the process of abstraction from early concrete thoughts to later abstract thoughts? Is that process itself conceptually mediated and if so what explains the possession of the concepts involved in the translation (concepts that permit the generation of an abstract concept from a concrete concept)?
But the way the worry was set out seemed to me just a mistake. If we think that brain activity is constitutive of conceptual mastery, it might follow. But the brain imaging results don’t establish that constitutive claim (that the brain activity just is the conceptual mastery) nor an enactivist version of it (which is what was initially flagged). A representationalist, for example, could accommodate the imaging data so it alone does not support enactivism. (Who is to say what the semantic significance of the brain activity is? Perhaps it is an interesting output of thinking the thought.) Nor does it support the constitutive reading since one might simply say the observed brain activity is some part of the causal underpinning of a conceptual mastery. But if it is merely part of a locally sufficient causal story (ie sufficient given our make-up), then differences in brain activity between the active and the indolent need not have any implications for concept individuation (this is just multiple realizability from the philosophy of mind applied to concept possession). My hunch is that the underlying worry would need something like an enactivist or other practical account of concept mastery, tying concept possession directly to contributory skills, rather than this detour via brain imaging.
Robert Crease (pictured) discussed the recent history of worries that particle accelerators might destroy the world and the interestingly different responses made to those by scientists. For example, should they be rebutted by theoretical or empirical arguments? (Such as: ‘the same conditions would have occurred on the moon but it’s still there, albeit with some large holes’ versus ‘theory rules it out’. Sadly scientists in their careful way tended to say merely that such catastrophic results were ‘implausible’.)
Perhaps the most optimistic element of his account was his description of the Daily Show’s take. They filmed the ‘point: counter-point’ exchange between the scientists and a particularly influential, horticulturally grounded scaremonger in such a way that it displayed the context: the madness of turning the choice of whom to trust over to the media. But the lingering question, of how such claims should in general be assessed and whether one should simply trust the scientists or hand the matter over to non-scientific lawyers, was central to the SEESHOP agenda.
(Darrin Durant suggested that a particular answer could be given by addressing whether, in the context, the scientists concerned were implicated in a politics one does not want. In the case of black hole creation there seems no particular reason to be suspicious of their motives. By comparison, the politics of the science of nuclear waste, he suggested, were highly suspect. I wasn’t sure why politics should play this role rather than any other potentially distorting factor. One could say that one should examine whether, in the context, there is reason to have doubts. Later over dinner Darrin explained why it was a matter of politics but this margin is too small to contain his convincing answer.)
Mark Addis reported on an AHRC-funded project of being a resident philosopher in the construction industry. The background was two recent reports on the under-performance of UK construction: eg poor business models; lack of capacity, talent, purpose and leadership; but also only 3% profit on projects and a general lack of trust. One response from government policy was a trend towards increasing IT involvement and away from expertise (although this move is contested by industry bodies). Similarly, the Construction Skills Council had called for the adoption of an approach called ‘lean construction’ despite controversy about what it means and whether it should be adopted.
His project’s aims included both ground-level issues for improving the construction industry and also meta-level reflection on how this reflected back on philosophy, including its impact. The method included, inter alia, identifying problematic assumptions about knowledge and organisation, and the aim of replacing a model of construction activity which starts with knowledge, preceding thinking about doing, preceding doing itself, with a rival account which starts with doing, with a narrative of the outcomes leading to a new explanation of doing.
Nicolas Schunck attempted to locate a notion of ‘commonness’ behind the idea of collective tacit knowledge: a kind of collective lack of disagreement rather than explicit agreement. This might be an interesting idea but he wanted to approach it through a worry about language: that language articulates ‘immediacy’ in such a way as to distance us from, or distort, what was pre-articulated: ‘commonness’. But his account of what was added was just the conceptualisation. To talk of a storm moving thus and so adds storm individuation, movement, a direction etc. But why that was a distortion wasn’t clear. Surely the notion of distortion would only be falsity at the level of articulation? And what went missing seemed to be simply its not-being-articulated status.
The problem though was that he then wanted to break the rules of his own game and say something about the pre-articulated, to compare it with the articulated (by saying that things were added or subtracted), to persuade us of the distortions. If he were right, however, he would be in no position to do this. Taken seriously, his position would surely collapse back into everyday (empirical) realism. That is all we have access to, by his own rules. (He did suggest that we had a glimpse of the pre-articulated in what changes when someone says of a cosy social setting: this is nice. If only philosophical insight / gesturing at the preconceptual were just a matter of hanging out in socially awkward ways: my career would have been so much better.)
Harry Collins described what he called the ‘domain specific discrimination’ of physicists. He’d been sent a paper from an academic physicist, published in a radical physics journal, which argued that the current multi-million dollar industry aimed at detecting gravity waves could not work. He had sent this to a number of highly reputable physicists and asked them whether they would read or generally take seriously such a paper. It turned out that they would not although none offered a technical argument against the paper. They would instead rely on a judgement that the paper had too many ‘markers of crankness’. But, argued Harry, that is an essential recognitional skill given the number of new papers. The anti-relativity industry, he suggested, had had enough time to make an impact but has not been successful.
There followed three papers on what’s apparently called ‘argumentation studies’ (AS) and its relation to SEE. Gabor Kutrovatz outlined various normative approaches to assessing expertise. He had examined blogs in Hungary discussing both the issue of whether the moon landings were faked and the issue of H1N1 vaccination, and counted the kinds of moves made by participants. Few used structural arguments compared with either simple deference to experts or pretending to expertise themselves. Gabor Zemplen outlined how an AS approach could be applied to codify Newton’s changing views of the physics of colour. Jean Goodwin outlined what she thought was a model of a way to provide a reason for members of the public to trust experts whose expertise they could not directly assess. Within a kind of black-box model of the interaction of citizen and expert she proposed a ‘blackmail and bond transaction’ in which the expert has both to provide an explicit account of their expertise and to undertake to stand at risk of loss of status. As Rob Evans pointed out, however, her black box contained only one expert and thus didn’t address the deep problem of conflicting experts. As was also pointed out, trust and accountability can be opposed and this seemed to be a model of the latter.
Sadly at this point my netbook ran out of steam. I recall that Mike Gorman described the experiences of a couple of humanities students working in science labs and Evan Selinger and Andrew Berardy outlined a proposal to use TFL teaching techniques to teach students interactional expertise in thermodynamics in order to address issues in sustainability. One interesting nugget was that thermodynamics has a kind of generally accepted codified exam in its conceptual mastery. They proposed to add the imitation game to that to test tacit knowledge.
I gave a paper exploring my worries about balancing the tacit and the knowledge statuses of tacit knowledge and got some interesting questions. One worry I did take away, both from that session and from the workshop as a whole, was this. More than once ‘tacit’ seemed to be opposed not to ‘explicit’ but to ‘explicable’, and that is surely a mistake (a mistake of not distinguishing between sense and reference). (Cf my earlier worries about robots and cycling here.)