Recently, the BBC published a report on fake news and summarized its findings with the words ‘Nationalism is the key driver of fake news in India’. The report was promptly dissected and called out for its miserably small sample size, reliance on biased sources, questionable design and definitions, and flip-flops between qualitative, quantitative and ethnographic research.
In the note on authorship, the writer claims: ‘This is a work of empirical evidence, and not of opinions’. Such a strong claim implies full reliance on rigorous quantitative methods, yet the report is being defended as a ‘qualitative report’ that is ‘just a starting point’.
The strong rebuttal of the report led to it being taken down, and there was hope that the authors would address the specific queries raised and subsequently take an objective look at the entire report. Unfortunately, none of that happened; without any explanation of the core issues raised, the BBC was back with an ostrich-like attitude of ‘We stand by our report’.
In this light, it becomes even more important to call out a report which is completely opaque on the most important points and seems to be driving an agenda. This would not have been so harmful had it been an opinion piece. But furthering a biased agenda by appropriating the word ‘research’ for it must pass through the rigor that is demanded of scientific methods.
To be considered a work of research, any work should fulfill these criteria:
- The data on which the report is based must be free of sampling bias.
- The conclusions obtained from the data, based on suitable methodologies, must be independently verifiable and reproducible.
The articles dissecting the BBC report have already demonstrated the biased sources of this report. That alone is enough to deduce that the findings of this ‘research report’ are false. Nevertheless, let us take a detailed look at the phenomenon of reports with false findings.
False research papers are flooding us continuously
A landmark paper on the phenomenon of false findings in published research, titled ‘Why Most Published Research Findings Are False’, was published in 2005 in PLoS Medicine by John P. A. Ioannidis. The paper has more than 6,000 citations to date, demonstrating that published reports with false findings are an epidemic in the academic field. It ascribes several reasons for research papers with false findings, all of which fall under the umbrella of bias.
Bias is defined in the paper as “the combination of various design, data analysis and presentation factors that tend to produce research findings when they should not be produced”. The corollaries, or the impact of bias on a given study, are listed below and have been taken from the paper.
- The smaller the studies conducted in a scientific field, the less likely the research findings are to be true (this addresses the effect of small sample size).
- The smaller the effect size, the less likely the research findings are to be true (this analyzes how small effects distort findings).
- The greater the number, and the lesser the selection, of tested relationships, the less likely the research findings are to be true (this addresses the impact of pre-study odds on the final outcome).
- The greater the flexibility in design and definitions, the less likely the research findings are to be true.
- The greater the financial and other interests and prejudices, the less likely the research findings are to be true.
- The hotter a scientific field, the less likely the research findings are to be true.
The study shows how the above factors often seep into publications, making their findings inaccurate or outright false. The idea of scientific rigor exists to ensure these factors have been accounted for, so that the findings are objective and verifiable. If the above points are not adhered to and biases are not systematically eliminated, the purported research findings become “simply accurate measures of the prevailing bias”.
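The paper's argument can be made concrete. Ioannidis models the positive predictive value (PPV) of a claimed finding as a function of the pre-study odds R, the type I and type II error rates α and β, and a bias term u. The sketch below (plain Python, with the formula transcribed from the 2005 paper; the example parameter values are illustrative choices, not figures from the BBC report) shows how even modest bias erodes the credibility of positive findings:

```python
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Positive predictive value of a claimed research finding,
    following Ioannidis (2005).
    R     -- pre-study odds that a probed relationship is true
    alpha -- type I error rate (false positive rate)
    beta  -- type II error rate (1 - statistical power)
    u     -- proportion of analyses that yield a positive
             finding purely because of bias
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Unbiased, well-powered study of a plausible hypothesis (1:1 odds):
print(round(ppv(R=1.0), 3))           # -> 0.941
# The same study with substantial bias (u = 0.3):
print(round(ppv(R=1.0, u=0.3), 3))    # -> 0.72
# A long-shot hypothesis (1:10 odds) with the same bias:
print(round(ppv(R=0.1, u=0.3), 3))    # -> 0.204
```

The pattern is the point: as bias grows or the pre-study odds shrink, most ‘positive’ findings become false, which is exactly the paper's thesis.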
Examining the BBC article against the above criteria of reliable research
Let us examine the BBC article in the light of the six points discussed above, taking into account their response to the criticism and their updated methodology as well.
1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true:
The sample size of 40 people has been defended by the BBC on the grounds that this is a qualitative report. In contrast, in the note on authorship, they claim that this is a work of empirical evidence and that the conclusions are thus undeniable.
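For what it is worth, if the 40-person sample were read as the quantitative ‘empirical evidence’ the note on authorship claims it to be, a back-of-the-envelope calculation (a hypothetical illustration, not part of the BBC's stated methodology) shows how imprecise such a sample is:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (normal approximation;
    p = 0.5 gives the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 40 gives roughly a +/-15.5 percentage-point margin:
print(round(100 * margin_of_error(40), 1))  # -> 15.5
```

A margin of error of roughly ±15 percentage points means almost any headline proportion drawn from such a sample is compatible with a wide range of realities, which is precisely why small studies are the first corollary on Ioannidis's list.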
2. The smaller the effect size, the less likely the research findings are to be true:
The above point implies that research findings with large effects are more likely to be true, and those with small effects more likely to be false. Without more transparency from the BBC, and without their bias being addressed, this can explain some aspects of their ‘research findings’. It looks probable that the BBC found certain instances of ‘nationalism driving fake news’ which may or may not have been significant, but amplified this finding into its main conclusion without rigorous cross-checking or analysis of alternative hypotheses. That is, they found some qualitative evidence for this phenomenon and elevated it to their main conclusion through confirmation bias.
3. The greater the number, and the lesser the selection, of tested relationships, the less likely the research findings are to be true:
This means that “the post-study probability that a finding is true depends a lot on the pre-study odds”. As many have pointed out, it seems entirely possible that the BBC first decided its conclusion and then set out to find the evidence for it. This is supported most strongly by the BBC’s response to its critics: without replying to the point-by-point criticism, they simply announced that they ‘stand with the conclusions of their articles’. The questions of bias and of the political affiliations of the fact-check websites have not even been touched upon. The most glaring evidence comes from the fact that there is no justification for the seed handles used to determine the fake-news cluster; one is simply required to take the BBC’s word at face value. This can only happen if the conclusions were already determined by the political bias of the BBC. The evidence of the leftist bias of the BBC is well recorded here.
4. The greater the flexibility in design and definitions, the less likely the research findings are to be true:
The paper notes that “flexibility increases the potential for transforming what would be negative results to positive results”. The BBC has staged a coup here, coming up with its own criterion of ‘sources having produced at least one piece of fake news’. There is no reference to any community-accepted definition of ‘sources of fake news’, no attempt to justify it, and no sign of acknowledging the mistake. The definition looks tailor-made to include certain seed handles and reach an already-decided conclusion. Worst of all, even for their own chosen criterion, they refused to provide transparency by showing the purported fake news published by the listed handles.
5. The greater the financial and other interests and prejudices, the less likely the research findings are to be true:
This seems to be an equal contributor to the report’s false findings. It need not be elaborated here, as it has been covered in detail at various links. The BBC relied on the ‘fact-check’ websites altnews and factcheck, both of whose founders have connections to the Congress party, the political rival of the BJP, the party maligned in the report. As no explanation seems to be forthcoming from the BBC on this link, the remaining explanation is that the report is an outcome of such prejudice rather than a work of honest analysis.
6. The hotter a scientific field, the less likely the research findings are to be true:
The arrival of social media has posed a great challenge to traditional media and broken their hegemony. Consequently, attempts to portray social media as a conveyor of fake news have been rampant, and articles about this phenomenon appear in traditional media with high frequency. The BBC seems to be capitalizing on this by publishing shoddy research, perhaps expecting it to go unchallenged amid the high volume of such articles. But the contribution of this factor looks rather small, as honest mistakes arising from it would have been corrected once pointed out. Given their political bias, we are inclined to give the BBC the benefit of the doubt on this count.
Thus, what we find is that the already-discredited ‘research article’ satisfies five of the six criteria that contribute to publishing reports with false findings. It is unfortunate and unbecoming of this institution to have adopted an ostrich attitude and refused to honestly answer the points raised about its article. Its response has been nothing but an eyewash and does not augur well for its future credibility.
This brings us to the most likely explanation offered in the landmark paper: what we are observing is the quantified bias of the people who participated in this report. Such bias may be forgivable in an opinion piece, but appending the word ‘research’ to it must invite questions and probes.
Can BBC salvage the situation?
There is no salvation for the BBC until it comes clean on the questions about the methodology used to select the Twitter seeds, why the particular definition of ‘sources with at least one piece of fake news’ was chosen, and the political bias and transparency of the websites altnews and factcheck. However, as the paper notes, ‘this may require a change in scientific mentality that might be difficult to achieve’. Will the BBC try to regain any of its lost honor?
Postscript: The need for transparency
There are a few glaring questions which need to be addressed by the BBC on an urgent basis.
1. The BBC report builds heavily on other reports. Were the veracity and bias of those reports analyzed, or were they accepted as they were?
2. The report lists a number of Twitter handles and websites as having ‘published fake news at least once’. Please provide the source data for this: that is, please list the ‘fake news’ spread by these handles which was found to be fake. The entire exercise is meaningless without this step, and independent verification is crucial to eliminating bias.
3. Were the above handles and websites contacted with the allegation of fake news against them, and was an effort made to hear their side of the story? If yes, please list their responses.
4. The criterion of ‘at least one piece of fake news’ is satisfied by multiple websites, including the BBC itself. Do you agree or disagree? Please refer to this article for a detailed list. On what basis was the BBC excluded?
5. The website ‘The Better India’ was included in the list of handles having at least one piece of fake news; its inclusion was subsequently attributed to ‘human error’. Please explain why it was included initially and how it was removed subsequently. This too is an elementary step.
6. For the criterion of ‘at least one piece of fake news’, who was responsible for preparing the labeled data of fake news and true news? Was it an algorithm or a human? If a human, what steps were taken to eliminate any political bias?
7. There have been many reports of bias against Altnews, factcheck etc. Despite the multiple pieces of evidence, why are these not labeled as sources of fake news with political bias?
8. OpIndia is a fact-checking website which has exposed multiple lies, and its editor was invited by the BBC itself to a panel discussion on fake news. Why was this website not contacted during the research phase?
References
1. Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Med. 2005;2(8):e124.
2. Ioannidis JPA. Why most published research findings are false: author’s reply to Goodman and Greenland. PLoS Med. 2007;4(6):e215.