For me, the AI agenda is all about augmenting human reasoning: what I call cognology (a cognitive focus), to distinguish it from technology (a physical focus). This is the core challenge for workflow and adoption.
Here are some thoughts on applying John Boyd’s OODA model of decision making, developed for the military, to healthcare.
Boyd developed OODA to characterise decision making by fighter pilots, who must react quickly; success lay in cycling through the loop faster than the opponent.
OODA stands for Observe, Orient (interpret), Decide (choose among options), Act. The speed with which a person works through that cycle reflects how quickly they can interpret evidence and reach a decision.
Artificial Intelligence has a role in each of these steps. It becomes quite important to know where to focus AI capabilities, what operational benefits flow from that, and what the wider impact of AI on clinical reasoning might be.
At root, that means being clear about which aspect of human reasoning AI is addressing, and where it sits in the decision-making process.
What has caused the most concern among critics is the risk that AI’s significant augmentation of human reasoning along the OODA process could, in the end, replace humans. My view is that we need to know where the AI augments and how, and where the AI replaces and why.
A worrisome example is AI in combat, with autonomous and semi-autonomous drones, the former capable of acting without human intervention: humans are “out of the loop”. Healthcare, too, offers the potential for clinicians to be “out of the loop”; if clinicians do not adopt augmented reasoning, the AI could dominate by default.
Boyd’s model is a cycle: Observe → Orient → Decide → Act, with the outcome of each action feeding back into fresh observation.
AI computational models are very good at dealing with this kind of complexity in decision making. I’d suggest much AI is still at the first two O’s: computational modelling of tumours, for instance, and suggesting where the highest risk lies. We are beginning to see the D addressed when clinicians are presented with treatment options, such as referring a patient with a hitherto unknown diagnosis for genetic testing where not referring was the default clinical decision (this relates to work I’m involved with on patient finding and undiagnosed rare conditions). Much AI has helped with the O, O and D. It is the A that is the coming challenge, and which has the potential to take humans ‘out of the loop’ and let the AI determine actions, e.g. automatically referring the patient.
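The division of labour above can be sketched in code. This is a toy illustration, not a real clinical system: every function, field name and threshold here is invented. The point it makes is structural, that the Act step can be gated on explicit clinician approval so the human stays in the loop.

```python
# Hypothetical sketch of an OODA cycle with a human-in-the-loop gate
# before Act. All names and values are invented for illustration.

def observe(patient_record):
    # O: gather raw signals from the record.
    return {"risk_markers": patient_record.get("risk_markers", [])}

def orient(observation):
    # O: interpret; here, a toy risk score from the marker count.
    return {"risk_score": min(1.0, 0.3 * len(observation["risk_markers"]))}

def decide(orientation, threshold=0.5):
    # D: present an option to the clinician rather than a fiat.
    if orientation["risk_score"] >= threshold:
        return "refer_for_genetic_testing"
    return "no_referral"

def act(option, clinician_approves):
    # A: the step that risks taking humans 'out of the loop'.
    # Requiring explicit sign-off keeps the clinician in it.
    if option == "refer_for_genetic_testing" and clinician_approves:
        return "referral_made"
    return "no_action"

record = {"risk_markers": ["marker_a", "marker_b"]}
option = decide(orient(observe(record)))
print(option, "->", act(option, clinician_approves=True))
```

Dropping the `clinician_approves` gate is exactly the design change that would let the AI determine actions by default.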
The reason this matters is that clinical processes involve predicting what health outcomes will follow from what treatment intervention. Here’s an example: AI is outperforming clinicians in diagnosis (measured by area under the ROC curve). The prediction models I’m working with for identifying patients with rare diseases operate at an AUROC of about 0.9, and when clinicians review the output as part of augmented reasoning, the AUROC jumps to over 0.97, approaching certainty of a rare disease diagnosis. At present, patients with rare diseases wait an average of 7 years for a correct first diagnosis and may see as many as 20 different clinicians on that journey. AI cuts that to ‘hours’ and fewer wasted clinical encounters, making the OODA cycle more precise and much quicker from the patient’s perspective.