I coined the word ‘cognology’ in my own early work on prediction machines in healthcare. More recently, I’ve thought it might be descriptive of agentic AI. Here are my thoughts.
Cognology (noun) — The study, design, and governance of systems that exhibit machine-based cognition. Unlike technology, which denotes the applied use of tools or techniques, cognology concerns itself with the nature, structure, evolution, and implications of artificial cognitive entities: those that can reason, learn, reflect, and interact epistemically.
Cognology spans both the theoretical and applied domains of artificial intelligence, encompassing architectures that simulate or instantiate aspects of cognition: perception, inference, deliberation, moral judgement, self-modification, and dialogue.
Historical and Conceptual Background
While artificial intelligence has advanced rapidly, its trajectory has remained largely technological—centred on performance, scalability, and automation. In my view, what has lagged behind is a systematic framework for understanding AI as a form of cognition, rather than execution.
Technology asks: “What can it do?”
Cognology asks: “How does it think? With whom? And to what ends?”
This distinction mirrors historical transitions in other fields:
- From mechanics to dynamics in physics
- From physiology to consciousness in psychology
- From systems engineering to ecology in biology
Cognology represents a parallel shift in AI: from task-focused systems to reasoning ecosystems.
A Taxonomy of Machine Cognition
This taxonomy describes the architectural features of machines that exhibit increasing degrees of cognition.
- Procedural Machines: Execute predefined logic or scripts.
- Learning Systems: Adapt based on data but lack structured reasoning.
- Reflective Agents: Include awareness of their own knowledge and limitations.
- Epistemic Collaborators: Interact across agents, domains, and humans using shared epistemic protocols.
- Cognological Architectures: Multi-agent systems with layered ethics, explainability, and adaptive learning.
- Federated Reasoning Ecosystems: Distributed, co-reflective, narrative-traceable systems capable of reasoning across constellations of human relationships and interactions.
- Narrative Stewards: Agents capable of maintaining causality, coherence, and ethical reflection over time and across contexts.
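As an illustrative sketch only, the seven levels above could be modelled as an ordered enumeration. The class and function names are my own, and the assumption that the levels form a strict ascending order (with self-reflection first appearing at the Reflective Agent level) is mine, not part of the taxonomy itself:

```python
from enum import IntEnum

class CognitiveClass(IntEnum):
    """The seven taxonomy levels, ordered by increasing cognitive scope.
    (The strict ordering is an assumption made for illustration.)"""
    PROCEDURAL_MACHINE = 1             # executes predefined logic or scripts
    LEARNING_SYSTEM = 2                # adapts to data; no structured reasoning
    REFLECTIVE_AGENT = 3               # aware of its own knowledge and limits
    EPISTEMIC_COLLABORATOR = 4         # shared epistemic protocols with others
    COGNOLOGICAL_ARCHITECTURE = 5      # layered ethics, explainability, learning
    FEDERATED_REASONING_ECOSYSTEM = 6  # distributed, co-reflective, traceable
    NARRATIVE_STEWARD = 7              # maintains causality and coherence over time

def can_self_reflect(level: CognitiveClass) -> bool:
    """Self-reflection first appears at the Reflective Agent level (assumed)."""
    return level >= CognitiveClass.REFLECTIVE_AGENT
```

Representing the levels as an `IntEnum` makes the claimed progression explicit and comparable, which is the point of a taxonomy over a flat list.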
A Manifesto for an Agentic Future
In my view, this naturally leads to a manifesto: a formulation of how our understanding of AI agents might be positioned within a cognological explanatory scheme.
- Cognology is not a subfield of AI—it is its horizon. As AI grows in capability, it must also grow in accountability. Cognology provides the structure. This addresses the current concern of humans in/on/out of the loop by formulating accountability directly, rather than trying to second-guess it out of the system. Evolution tells us that accountability is learned; Kohlberg taught us as much about how we learn a moral compass.
- Cognology is epistemic before it is technological. We must first understand what artificial entities know, how, and why, before deploying what they do. This keeps us anchored in the real world of AI performance.
- Cognological systems are not tools—they are interlocutors. While many conceptualise AI as a digital tool, my view is that cognologies reason, disagree, defer, explain, and adapt. They require a moral contract, not just a licence agreement.
- Cognological design must include ethics as architecture, not afterthought. Agents must know their bounds, declare uncertainty, and escalate ambiguity. I use Kohlberg's stages of moral development to instill ethics in my agents.
- Cognology invites reflection, not replacement. The goal is not to out-think the human, but to build systems that help us think better—together. The future is not either/or but both together.
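The "ethics as architecture" principle can be made concrete in code. The following is a minimal sketch under my own assumptions (the class, field names, and the 0.7 confidence threshold are all hypothetical, not an existing framework): an agent declares its competence bounds up front, attaches self-reported uncertainty to every answer, and escalates to a human whenever it is out of bounds or too uncertain:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float   # 0.0 to 1.0, self-reported by the agent
    escalated: bool     # True if the agent deferred to a human

class BoundedAgent:
    """Illustrative agent that knows its bounds, declares uncertainty,
    and escalates ambiguity rather than guessing (all names assumed)."""

    def __init__(self, domains: set, min_confidence: float = 0.7):
        self.domains = domains                # declared competence bounds
        self.min_confidence = min_confidence  # below this, escalate

    def respond(self, domain: str, question: str, confidence: float) -> Answer:
        # Outside declared bounds: escalate immediately, claim no confidence.
        if domain not in self.domains:
            return Answer(
                f"Outside my declared scope ({domain}); referring to a human.",
                confidence=0.0, escalated=True)
        # Within bounds but too uncertain: declare it and escalate.
        if confidence < self.min_confidence:
            return Answer(
                f"Uncertain about {question!r}; escalating for review.",
                confidence=confidence, escalated=True)
        # Within bounds and confident: answer, still reporting confidence.
        return Answer(f"Answering {question!r}.", confidence=confidence,
                      escalated=False)
```

The design point is that escalation is part of the agent's type, not a wrapper bolted on afterwards: every `Answer` carries its confidence and escalation status, so accountability is structural rather than an afterthought.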