When Faraday first tested his ideas about electromagnetic induction, he followed the scientific method: he formed a hypothesis and then tested it experimentally. Physics in particular has always followed this strict process, with the aim of achieving objective, unbiased results. However, with the volume of data now available, modern science may soon follow a different method. As we increasingly rely on AI to analyse this data, biases in AI systems could affect the integrity of the scientific process and perhaps usher in a new era of science built upon finding patterns in data instead of forming hypotheses.

Data-Driven Bias

The first form of bias evident in machine learning is known as “data-driven bias”. It occurs because data is never collected perfectly: datasets may be uneven across categories, or scientists may have a hypothesis in mind when collecting the data. Machine learning only exaggerates the issue: as the dataset is fed into the algorithm, the model becomes skewed towards frequently occurring classes in an attempt to improve its overall accuracy. When the model is then exposed to real-world data, its accuracy is much lower on the classes it saw less often in training.

An example of this is an image classifier. Say we want to classify images of fruit: if the dataset we train our model on has many more images of apples than strawberries, the final model is likely to be more accurate on apples than on strawberries, indicating a bias.
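
To make this concrete, here is a minimal sketch in Python. The class names, feature values and 9:1 imbalance are all invented for illustration (no real dataset is involved); it simply shows how an imbalanced training set drags down accuracy on the minority class:

```python
# A minimal sketch of data-driven bias on synthetic "fruit" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Imbalanced training set: 900 "apples" vs 100 "strawberries".
n_apple, n_straw = 900, 100
X = np.vstack([
    rng.normal(loc=0.0, scale=1.5, size=(n_apple, 2)),  # apples
    rng.normal(loc=2.0, scale=1.5, size=(n_straw, 2)),  # strawberries
])
y = np.array([0] * n_apple + [1] * n_straw)
model = LogisticRegression().fit(X, y)

# Evaluate on a balanced test set to expose the bias.
X_test = np.vstack([
    rng.normal(loc=0.0, scale=1.5, size=(500, 2)),
    rng.normal(loc=2.0, scale=1.5, size=(500, 2)),
])
y_test = np.array([0] * 500 + [1] * 500)
pred = model.predict(X_test)

# Per-class accuracy (recall): the majority class typically scores
# noticeably higher, even when overall accuracy looks respectable.
print("apple accuracy:     ", recall_score(y_test, pred, pos_label=0))
print("strawberry accuracy:", recall_score(y_test, pred, pos_label=1))
```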

This has huge implications for scientific research: what if particle-detection models were biased towards certain particles because of how often they appear? We would end up misclassifying rarer particles due to the model’s low accuracy on them. Worse, the overall accuracy may still be very high, so the bias can easily be missed.

Correlation Fallacy

Correlation fallacy is an interpretation-driven bias that can creep into AI systems. Humans often confuse correlation with causation, and computers can make the same mistake. For example, a skin cancer identification algorithm developed at Stanford University, once considered a landmark achievement, was shown to be looking for rulers in images rather than detecting cancers on the skin. In the data it was trained on, cancers were often photographed alongside a ruler for scale, whereas healthy skin was not. The algorithm was therefore really classifying the presence of a ruler, and could have misinformed a high proportion of patients. From the AI’s perspective, rulers somehow caused skin cancer!
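
The mechanism is easy to reproduce in miniature. The sketch below, loosely inspired by the ruler story but using entirely made-up features, trains a model where a spurious “ruler present” flag happens to track the label in training; once that correlation breaks at deployment time, performance collapses:

```python
# A toy sketch of shortcut learning. Column 0 is a genuine but noisy
# signal; column 1 is a spurious "ruler present" flag that tracks the
# label only in training. Both features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
label = rng.integers(0, 2, size=n)           # 1 = "cancer" in this toy setup

real_signal = label + rng.normal(0, 2.0, n)  # weak, noisy real feature
ruler_flag = label.astype(float)             # ruler appears iff label == 1
X = np.column_stack([real_signal, ruler_flag])
model = LogisticRegression().fit(X, label)
print("learned weights [real, ruler]:", model.coef_[0])  # ruler dominates

# Deployment: rulers no longer correlate with the label.
label_new = rng.integers(0, 2, size=n)
X_new = np.column_stack([
    label_new + rng.normal(0, 2.0, n),        # real signal still present
    rng.integers(0, 2, size=n).astype(float)  # ruler is now random
])
# Accuracy falls towards chance: the model learned the ruler, not the skin.
print("deployment accuracy:", model.score(X_new, label_new))
```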

This has a host of implications for science, which is already a battleground between correlation and causation. As we use AI to identify patterns in data, will we fall victim to this issue even more frequently? Or will we settle for correlation in place of causation, setting ourselves up for an era of what The Guardian refers to as “post-theory science”? The danger of this, of course, is that we could lose sight of the “truth” science is built on and look only for patterns that hold most of the time.

However, so-called post-theory science has yielded extraordinary breakthroughs in recent years. AlphaFold, the algorithm built by DeepMind to predict the structure of proteins, is one example. The algorithm is very accurate but still, rather unpredictably, makes mistakes. AlphaFold could pave the way to improvements in healthcare, but in a sense it also shows how we no longer search for the whole truth and instead settle for approximations of it with the help of AI. Whilst we know the architecture of the neural network used in AlphaFold, we don’t really know how the model reaches its answers. In other words, AI has answered the “what” of our problem but not the “why”.

Automation Bias

The final form of bias in an AI system comes not from the algorithm itself but from the interpretation of its outputs, most commonly when humans mistake the AI’s predictions for ground truth.

In the judicial system, this has immediate consequences. An example is the COMPAS algorithm, used in the USA to predict the chance of prisoners reoffending. Whilst the algorithm had roughly equal accuracy across groups, ProPublica showed that it had a higher false-positive rate for people of colour: black people were more likely than white people to be falsely predicted to reoffend. The case highlighted several issues relating to our use of AI in society.
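
The arithmetic behind that finding is worth seeing. The sketch below uses invented confusion-matrix counts (not real COMPAS figures) to show how two groups can share the same overall accuracy while one suffers a far higher false-positive rate:

```python
# Invented confusion-matrix counts, chosen only to illustrate how equal
# accuracy can hide unequal false-positive rates. Not COMPAS data.
def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # share of non-reoffenders wrongly flagged high-risk
    return accuracy, fpr

acc_a, fpr_a = rates(tp=45, fp=25, tn=25, fn=5)   # group A: many false positives
acc_b, fpr_b = rates(tp=25, fp=5, tn=45, fn=25)   # group B: many false negatives

print(f"group A: accuracy={acc_a:.2f}, false-positive rate={fpr_a:.2f}")
print(f"group B: accuracy={acc_b:.2f}, false-positive rate={fpr_b:.2f}")
# Both groups score 0.70 accuracy, yet FPR is 0.50 vs 0.10.
```

This is the sense in which “equal accuracy” can coexist with unequal treatment, and why a single headline metric is never the whole story.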

Firstly, there is currently minimal regulation of AI algorithms. Whilst anti-discrimination laws apply, competing definitions of fairness can make them difficult to enforce. Because of this, algorithms such as COMPAS have been widely used before being flagged. If biased algorithms slip through and are used in research, imagine the complex issues they could cause!

Secondly, judges often took too much notice of the predictions COMPAS produced and failed to consider whether the algorithm could be wrong. This tendency to trust AI over our own judgement bodes badly for science. If we become blind to the shortcomings of AI, we may misinterpret the results it gives us and reach incorrect conclusions. Worse still, we will fail to recognise the significance of the biases it introduces.

Conclusion

We may never return to the strict method of classical science, given the ambiguities AI introduces, but we have also gained a new set of data superpowers. AI will help us understand increasingly complex patterns and will lead to vital insights across every field. But if we rely on it too heavily and ignore the biases it introduces, we risk losing the integrity of the scientific method. Most worryingly of all, we risk no longer caring about the deeper “why” questions that AI struggles to answer.

This article was originally published in the Eltham College Science Magazine (2022).

Bibliography

Wheeler, Nicole. "Publication bias is shaping our perceptions of AI". Towards Data Science, 2019.

Spinney, Laura. "Are we witnessing the dawn of post-theory science?". The Guardian, 2022.

The AlphaFold Team. "AlphaFold: a solution to a 50-year-old grand challenge in biology". DeepMind, 2020.

The ProPublica Team. "Machine Bias". ProPublica, 2016.