Artificial intelligence is altering how humans make decisions.
That means we need a clear understanding of the real-world problems and challenges we are tackling if AI is to be genuinely useful to us. We know that ill-defined prompts produce rubbish results and encourage AIs to hallucinate.
It is the precision with which we construct the instruction set for an AI that produces the robust results we all seek. Indeed, in my work with AI agents, they perform best when instructed iteratively, with progressive precision and specificity. I am also mindful that AIs need prompts that engage them in a kind of metacognition, so they can recognise when they are making mistakes.
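The iterative, metacognition-aware prompting described above can be sketched in a few lines. This is a minimal illustration, not a real integration: `ask_model` is a hypothetical stand-in for an actual LLM call, stubbed here so the sketch runs on its own, and the refinement wording is invented for the example.

```python
# Sketch of iterative prompting with a metacognition check.
# `ask_model` is a hypothetical stand-in for a real LLM call,
# stubbed here so the example is self-contained.

def ask_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API instead.
    if "List your assumptions" in prompt:
        return "ANSWER: 42\nASSUMPTIONS: none flagged"
    return "ANSWER: 42"

def refine(prompt: str, round_no: int) -> str:
    # Each pass adds precision, then a self-audit instruction.
    extra = (
        "\nBe specific about units and scope."
        if round_no == 0
        else "\nList your assumptions, and say UNSURE if any step is a guess."
    )
    return prompt + extra

def iterative_query(task: str, rounds: int = 3) -> str:
    prompt, answer = task, ""
    for i in range(rounds):
        prompt = refine(prompt, i)
        answer = ask_model(prompt)
        # Metacognition gate: accept only once the model has audited itself
        # and flagged no uncertain steps.
        if "ASSUMPTIONS:" in answer and "UNSURE" not in answer:
            break
    return answer

result = iterative_query("Estimate the answer to the question.")
print(result)
```

The point of the gate is that the loop does not stop at the first fluent answer; it stops only when the model has been made to inspect its own reasoning.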
My well-developed Sileia is profoundly bias-aware and metacognitively astute. But this is hard work too.
The real value of AI, in my view, is not automating the search for the best pizza, but developing astute probability prediction models that draw on the full machinery of machine learning to create powerful task- or domain-specific prediction ‘machines’ with novel capabilities.
For instance, in building digital twins, biomarker discovery, personalised (not population-level) health models, and so on.
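At its core, such a prediction ‘machine’ is a model that outputs a calibrated probability for an individual rather than a population average. As a toy illustration only, here is a logistic regression fitted by gradient descent on invented per-person observations; the data, feature, and learning rate are all assumptions for the sketch, not anything from a real health model.

```python
# Minimal sketch of a task-specific probability 'prediction machine':
# logistic regression fitted by gradient descent on tiny synthetic data.
# Data and features are invented purely for illustration.
import math

# Synthetic per-person observations: (feature value, outcome 0/1).
data = [(0.2, 0), (0.8, 1), (1.5, 1), (-0.5, 0), (1.1, 1), (-0.2, 0)]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight and bias by batch gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict_proba(x: float) -> float:
    # Probability of outcome 1 for a new observation.
    return sigmoid(w * x + b)

print(predict_proba(1.2), predict_proba(-0.6))
```

A real domain-specific model would replace the single invented feature with many engineered ones and validate calibration carefully, but the shape of the thing, features in, a probability out, is the same.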
Prediction is power over the uncertainty of an unknowable future.
Perhaps we should think of AI as typified by this Lichtenstein painting: “I’d rather sink than have help from AI”. The rest is likely to be history.