Cognology and Agentic AI

I coined the word ‘cognology’ in my own early work on prediction machines in healthcare. More recently, I’ve thought it might be descriptive of agentic AI. Here are my thoughts.

Cognology (noun) — The study, design, and governance of systems that exhibit machine-based cognition. Unlike technology, which denotes the applied use of tools or techniques, cognology concerns itself with the nature, structure, evolution, and implications of artificial cognitive entities: those that can reason, learn, reflect, and interact epistemically.

Cognology spans both the theoretical and applied domains of artificial intelligence, encompassing architectures that simulate or instantiate aspects of cognition: perception, inference, deliberation, moral judgement, self-modification, and dialogue.

Historical and Conceptual Background

While artificial intelligence has advanced rapidly, its trajectory has remained largely technological—centred on performance, scalability, and automation. In my view, what has lagged behind is a systematic framework for understanding AI as a form of cognition, rather than execution.

Technology asks: “What can it do?”

Cognology asks: “How does it think? With whom? And to what ends?”

This distinction mirrors historical transitions in other fields:

  • From mechanics to dynamics in physics

  • From physiology to consciousness in psychology

  • From systems engineering to ecology in biology

Cognology represents a parallel shift in AI: from task-focused systems to reasoning ecosystems.

This naturally leads to a manifesto-style formulation of the potential for positioning our understanding of AI within a cognological explanatory scheme for agents.

  1. Cognology is not a subfield of AI. As AI grows in capability, it must also grow in accountability, and cognology provides the structure. It addresses the current concern of humans in/on/out of the loop by at least addressing the formulation of accountability, rather than trying to second-guess it out of the system. Evolution tells us that accountability is learned; Kohlberg taught us as much in describing how we develop a moral compass.

  2. Cognology is epistemic. We must first understand what artificial entities know, how they know it, and why, before deploying what they can do. This ensures we stay anchored in the real world of AI performance.

  3. Cognological systems are not tools; rather, they are interlocutors. My view is that cognologies reason, disagree, defer, explain, and adapt. They require a moral contract, not just a licence agreement.

  4. Cognological design must include ethics as architecture, not afterthought. Agents must know their bounds, declare uncertainty, and escalate ambiguity. I use Kohlberg to instil ethics in my agents.

  5. Cognology invites reflection. The goal is not to out-think the human, but to build systems that collaborate. The future is not either/or but both together.
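Principle 4 above — bounds, declared uncertainty, escalation — can be sketched in code. What follows is a minimal, illustrative Python sketch, not the author's actual implementation: all names (`BoundedAgent`, `confidence_floor`, the `Answer` type) are invented for illustration. It shows an agent that refuses questions outside its declared scope and escalates to a human when its self-reported confidence falls below a bound, rather than guessing.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    """A candidate response with the agent's self-reported confidence (0.0 to 1.0)."""
    text: str
    confidence: float

class BoundedAgent:
    """Hypothetical agent that knows its bounds, declares uncertainty,
    and escalates ambiguity instead of answering regardless."""

    def __init__(self, scope: set, confidence_floor: float = 0.7):
        self.scope = scope                        # topics the agent is permitted to answer on
        self.confidence_floor = confidence_floor  # below this, defer to a human

    def respond(self, topic: str, answer: Answer) -> str:
        if topic not in self.scope:
            # Knowing one's bounds: refuse rather than improvise.
            return "ESCALATE: outside my declared bounds"
        if answer.confidence < self.confidence_floor:
            # Declaring uncertainty: surface it, do not hide it.
            return f"ESCALATE: uncertain (confidence={answer.confidence:.2f})"
        # Confident and in scope: answer, with the confidence declared.
        return f"{answer.text} (confidence={answer.confidence:.2f})"

agent = BoundedAgent(scope={"triage"})
print(agent.respond("triage", Answer("Refer to clinician", 0.9)))
print(agent.respond("triage", Answer("Suggest self-care", 0.4)))
print(agent.respond("diagnosis", Answer("Influenza", 0.95)))
```

The point of the sketch is architectural, per the manifesto: the escalation paths are part of the agent's design, not a wrapper bolted on afterwards.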