After sitting on this idea for well over a year, I got the push today to talk about it. But let me get back to that.
I have always been fascinated by the notion of tacit knowledge.
When the image above popped up on my monitor after watching something on YouTube, I got so vehemently disgusted by one of the faces that I just had to ask myself why.
Can you find the face? The face of the self-congratulating, cocky, smug, leftist [expletive deleted]?
Just look at her website. The most prominent thing on her homepage is a revoltingly vulgar animated gif of dripping female genitalia. That’s how she advertises herself. That is what she thinks is the most important thing she has to communicate about herself and her world. Isn’t it legitimate to call her a c**t? Just in case it’s not clear, I would find the image of a dripping male organ just as offensive. What is offensive is the vulgarity and the intent to shock behind it.
Why was I not surprised by the vacuous sloppiness of her website? What made it so obvious just from looking at her face? How did I know that she is stupid before even hearing a word from her mouth? Why did I have such a visceral reaction?
Watching the interview justified every bit of my very negative first impression. The woman is revoltingly stupid, but this confirmation still does not answer my question: how did I know? What in her physical appearance betrays the smug virtue signaling that seems to be the central focus of her existence? Even the vulgarity is part of the virtue signaling: “Look how brave I am! I stand up to the patriarchy!”
The question is, again, NOT who and what she is, but HOW whatever that is can be recognized in her appearance.
What is the foundation of tacit knowledge? How do we collect the elements of this knowledge? Presumably, it is built up from previous experiences, but how? How much knowledge do we possess that we never conceptualized or verbalized? Can we even call it knowledge? Is it built-in bias? Is it prejudice?
Or is it just ‘instinctual good judgement’? From this particular encounter, watching the interview, I got EXACTLY what I expected based on my first-glance impression.
No matter how well this knowledge serves me, I’m afraid I am not supposed to listen to it. But I can’t NOT listen to it. Even if I could, I would have to ask why not? What makes it a bad thing? What makes any kind of knowledge a bad thing?
What made me pick up the subject again was this BPS video talking about a recently developed Chinese AI algorithm that can pick out criminals from a series of pictures with 89% accuracy. From what I understand, it looks like the AI algorithm managed to conceptualize what we cannot. Isn’t that a good thing? Isn’t knowledge ALWAYS a good thing?
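For readers unfamiliar with how a figure like “89% accuracy” is usually arrived at, it is simply the fraction of a labeled test set on which the classifier’s prediction matches the ground truth. A minimal sketch, with entirely made-up labels and predictions for illustration (nothing here comes from the actual Chinese study):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical test set: 1 = "flagged", 0 = "not flagged"
labels      = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]

print(f"accuracy = {accuracy(predictions, labels):.0%}")  # 8 of 10 correct -> 80%
```

Note that raw accuracy says nothing about which kind of mistake the classifier makes (false alarms versus misses), which matters a great deal when the flagged category is rare in the population.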
My first encounter with the question was in an earlier life, when I was administering Szondi tests as part of a large-scale study. Feel free to read all about it; it is a personality test based on expressing likes and dislikes toward six sets of images of eight personality archetypes. This is one of the sets:
It is not widely used any more; its psychometric reliability is in question.
I am not a psychologist and was not interested in the test’s utility, which, on the other hand, clearly measures one thing: our statistically significant consistency in reacting to the eight archetypes. We react to something, without the ability to verbalize what precisely it is that we are reacting to. We would not have developed this ability if it weren’t essential for our survival as a species.
The outcry about the Chinese algorithm is loud, even though it only does what we have been doing instinctively for millions of years, only, perhaps, a little better. The problem is that we do not like it. Reality is a bitch. Truth is a bitch. In an earlier post about artificial intelligence I tried to point out how dangerous it is to program human stupidity into artificial intelligence. Because in the end, the question is not about technology or artificial intelligence. The question is about us. Are we smart enough to ask honest questions? Are we wise enough to accept the answers?
And if we are not, whose fault is that? The algorithm’s?