NB: I don't think what follows is exclusive to scientific/unscientific thinking. More generally, anyone can have the tendency to assume that expertise in one area makes their opinions on some other, unrelated area valid, despite a lack of knowledge of that other field or topic.
That said, like most people I don't have scientific training, and it's important to recognise that this matters when it comes to interpreting scientific data: I simply don't have the requisite skills. In that situation there seem to be two choices. Someone like me can trust scientists to apply their training and arrive at some degree of consensus; there are established procedures for this. The alternative is to believe something on the basis of, well, what exactly? Whatever random combination of subjective influences happens to make one claim seem more legitimate than another. Of course scientists sometimes get things wrong or disagree, but as with accusations of 'fake news', the impression that science isn't generally reliable rests on a small number of errors set against a much larger body of useful, practical consensus. The fact that scientists are not infallible or omniscient is no reason to believe that a collection of random subjective notions is a superior way to arrive at an understanding of a given scientific topic. Just how deeply entrenched this idea is, however, can be seen in the fact that the US now has an elected president who is the epitome of this kind of delusional thinking.
What I'm interested in discussing is: what drives it? What is behind the idea that not knowing is somehow superior to knowing? I tend to think it might be a fear response of some sort: an unrealistic need for a direct link to absolute certainty when none is available, a desire to shortcut the reality of multiple uncertainties in order to feel grounded or secure. Maybe, as with Trump, it protects an inner need to appear knowledgeable and in control without putting in the effort required to actually become knowledgeable. Maybe it's a way of attacking the 'elites' that people feel, often with good reason, have let them down or misled them; anyone doing a complex job that affects a lot of lives is regularly going to upset people, and the grievance is sometimes justified and sometimes not. Maybe it's a failure of education systems more concerned with churning out cogs for the machine than with teaching people how to self-actualise. I don't really know, but it does seem to be something of that sort. Anyway, I'm hoping this will lead to a useful discussion.
Why is it a problem? This article in New Scientist is informative:
If there's a paywall, here are some key parts:
'In the race to understand the coronavirus, and amid the cacophony of political messages, inexpert journalists and viral social media messages, a parallel pandemic has emerged – one of rumours, unverified claims and malicious falsehoods.
Preprint servers enable information to “flow directly from people who are making scientific claims to users who don’t have the savvy to evaluate those claims”, says Jonathan Kimmelman, a biomedical ethicist at McGill University in Canada.
...people who wouldn’t normally be interested in biomedical preprints, and don’t necessarily understand or care about their limitations, have started reading and sharing them. That includes politicians, policy-makers, journalists, bloggers, social media influencers, armchair pandemic warriors, political agitators and conspiracy theorists. “When you mix the science with all that social and media reverberation, you get an explosive mix, and that creates havoc.”
The much-touted antimalarial drug hydroxychloroquine is a good example of the system going badly wrong. A preprint about the drug’s efficacy against covid-19 in a small clinical trial appeared on 20 March (medRxiv, doi.org/dp7d). The trial was poorly conducted, says Alfred Kim at Washington University School of Medicine in St Louis, Missouri, who wrote a critique of it in the Annals of Internal Medicine (doi.org/ggq8b4). Among other issues, the trial had a sample size of just 20 people (see “How to sniff out the good science studies from the bad”).
A second preprint by different researchers detailing methodological flaws in the trial appeared three days later (Zenodo, doi.org/dtsn).
“Any medical study with fewer than 50 participants should be treated as highly tentative”
Nonetheless, says Kim, the trial’s findings were picked up and amplified by the press, social media and many government and institutional leaders, including US president Donald Trump, who famously called the drug a “game changer”. Public interest exploded.
“There was striking dissimilarity between what they said they were going to do in that study and what was actually reported.” A diligent peer reviewer might have picked this up, he says, but somebody who isn’t an expert in the methodology of clinical trials has little chance of doing so.
This shows just how difficult it is for even skilled journalists to pick up pretty glaring errors in research reports, says Kimmelman, who adds that even trained doctors are rarely equipped to do so.
Another issue is experts in one field turning their hand to another. In March, for example, an electrical engineer and a cardiologist posted a preprint estimating that the UK could experience just 5700 covid-19 deaths (medRxiv, doi.org/dtss). Several UK newspapers gave the estimate prominent coverage. The UK’s confirmed death toll currently stands at over 28,000.
Kimmelman believes there is a wider societal issue. “I think this is part of a much broader problem of how information flows in contemporary societies, particularly around expertise. We’ve seen parallel issues in politics and democracy – fake news, false claims, etc.”'
(Source: New Scientist, “How the covid-19 pandemic has led to a flood of misleading science”.)
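As an aside on the sample-size point quoted above, it's easy to see numerically why a 20-person trial proves so little. Here's a minimal sketch in Python (my own illustration, not from the article or the actual trial; the response counts are hypothetical) showing how wide a 95% confidence interval for a treatment response rate is at n=20 compared with larger samples:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for an observed proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Same observed 70% "response rate" at three hypothetical sample sizes.
# At n=20 the interval spans roughly 48%-85%; at n=100 it narrows to ~60%-78%.
for n, k in [(20, 14), (50, 35), (100, 70)]:
    lo, hi = wilson_ci(k, n)
    print(f"n={n:>3}: observed {k/n:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

In other words, an apparently impressive "70% improved" figure from 20 people is statistically compatible with anything from barely better than a coin flip to a strong benefit, which is exactly why such findings should be treated as highly tentative.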