Covid-19: Imperial Hubris – Reproducibility isn’t Correctness

Imperial College (an alma mater of mine) seems to be engaged in damage control after Prof Neil Ferguson’s consistently inaccurate modelling predictions regarding the Covid-19 outbreak.

Their latest claim is that the results of their program CovidSim are “reproducible”. Well, bully for them, and that may be so, but reproducibility doesn’t mean the results are correct, merely consistent.

Their modelling software is being used in safety-critical applications, for example forecasting the demand for hospital resources needed to cope with the pandemic. By massively overestimating likely hospitalisations, it led to tens of thousands of routine operations and investigations for other diseases, such as cancer, being shelved or postponed. The aftermath of that is only just becoming apparent.

The process of writing and verifying safety-critical software is very rigorous. I should know, as I ran a company specialising in just that for 45 years. When Prof. Ferguson originally released the software for public scrutiny, it was immediately apparent that it had been written informally and without rigour. Various professionals, Microsoft included, were swiftly brought in to at least shore it up.

So this (probably) factually true but not necessarily relevant press release seeks to airbrush the software with a pleasant sheen.

The CovidSim software is written in C, a language particularly unsuitable for safety-critical code: its behaviour is undefined or poorly specified in many areas, making verification of a design almost impossible.
Public health is a safety-critical application; many people can die
when wrong results are used for policy decisions.

The Press Release includes this statement:

“Some world-leading software engineers have helped scrutinise, review and improve Imperial’s code and modelling, including John Carmack, the legendary videogame developer.”
Having the software blessed by an eminent computer games author is unlikely to cut it with the Health and Safety Executive …

Commenting in April, John Carmack said that the code “fared a lot better going through the gauntlet of code analysis tools I hit it with than a lot of more modern code. There is something to be said for straightforward C code. Bugs were found and fixed, but generally in paths that weren’t enabled or hit. [my incredulous italics] Similarly, the performance scaling using OpenMP was already pretty good, and this was not the place for one of my dramatic system refactorings. Mostly, I was just a code janitor for a few weeks, but I was happy to be able to help a little.”

This simulator and those of other modellers continued to have a dominant influence on public health Covid-19 policy until late November 2021. Utter madness; the practical results are evident.

As Brian Tracy used to say:

“If you only do what you always do
You’ll only get what you’ve always got”
