Weirdness and detachment
Many debates never really get started. The participants, or at least some of them, don’t engage with the other side’s arguments. You see this perhaps especially in hot-button political topics. But also in the discussion of AI risk, which many, including famous professors, have dismissed without really engaging with the relevant arguments.
It’s often suggested that we should cultivate curiosity and open-mindedness to counter such tendencies. On that view, we’re naturally close-minded. That’s why we discard ideas we don’t like out of hand. So the solution is that we should become more curious about weird ideas. We should celebrate entertaining them. We should cheer for open-mindedness.
That’s often an improvement on the close-minded attitude. But at the same time, it doesn’t seem to me to be quite the right attitude either. The close-minded person looks at a weird idea and says “I’m going to penalise this idea because it’s weird”, with associated emotions. The person who celebrates weird ideas instead gives them a bonus, with their own associated emotions.
But a third approach is to try, as far as possible, to give ideas neither a bonus nor a penalty just because they are weird. And to try, as far as possible, not to feel very strongly about them either way. Or at least not to let our emotions influence our evaluation of ideas too much.
Historians often say that we shouldn’t fall for the temptation to create emotionally appealing narratives about the past, but should just study “how things actually were” (von Ranke; German: “wie es eigentlich gewesen”). Similarly, we should, as far as possible, disregard the emotional tone of ideas. That AI risk is “weird” should, in itself, neither speak against it nor in its favour. Instead we should just look at what the evidence actually says, without being too emotional about it.
Of course, this is easier said than done. We don’t have complete conscious control over such attitudes. But there are some things we can do to make it more likely that we get the right attitudes.
One thing that likely helps is to get down into the empirical details. Another is to try to debate with civility.
Yet another is to avoid metaphors and framings of AI risk that can trigger either a “let’s reject this weirdness” mindset or a “let’s celebrate this weirdness” mindset.
Instead, we should try to stay more detached.
Related to Emotions and the Search for the Truth.