Glossary
The following are working definitions used throughout Countdown to Machine Consciousness. They are anchors for clarity, not final philosophical claims.
Intelligence
The capacity to learn, model, plan, and adapt in pursuit of goals across varying contexts—especially under uncertainty.
Agency
The capacity to initiate actions in pursuit of goals, maintain internal state over time, and revise behavior based on outcomes. Agency ranges from narrow (task-bound) to broad (long-horizon and self-maintaining).
Autonomy
A practical measure of how independently a system can pursue goals in the world: selecting actions, handling novelty, recovering from error, and continuing without continuous human supervision. Autonomy is a spectrum, not a binary.
World-model
An internal representation—explicit or implicit—of how the world works, allowing prediction, planning, and causal inference. A system’s world-model may be partial, brittle, or domain-bound, but it is central to long-horizon agency.
Memory
The ability to retain and use information across time. In this project, “memory” includes both internal state (what the system carries forward) and externalized memory (what it retrieves or logs), because both can support continuity of behavior and apparent personality.
Planning
The ability to select actions over time in pursuit of goals, including sequencing, tradeoffs, and contingency handling. Planning becomes more significant as time horizons lengthen and environments become less predictable.
Embodiment
Meaningful coupling to the world through perception and action—sensors, effectors, constraints, and feedback loops that shape cognition over time. Embodiment does not require a humanoid body; it means being embedded in a world where actions have consequences and learning is grounded.
Grounding
The connection between symbols, language, or internal representations and the world they refer to, typically mediated by perception and action. Grounding reduces the degree to which a system’s “understanding” is merely verbal.
Consciousness
A contested term. In this project, “consciousness” refers to subjective experience—what it is like, if anything, to be a given system. The project does not assume machines are conscious; it tracks whether the enabling conditions that might make consciousness plausible are strengthening.
Sentience
The capacity for felt experience, especially the experience of pleasure, pain, distress, or well-being. Sentience is often treated as morally relevant even when agency or intelligence is limited.
Personhood
A moral and legal category, not merely a biological one. Personhood typically implies standing in a community of rights and responsibilities. The project treats personhood as a debated downstream question that may arise if agency and/or sentience claims become credible.
Moral agency
The capacity to recognize reasons for action, deliberate about them, and be appropriately held responsible. Moral agency implies some form of understanding of norms and consequences, and some degree of control over action.
Moral patiency
The capacity to be morally wronged—to have interests that deserve protection even if the entity is not a moral agent. In many ethical traditions, sentience is a strong candidate basis for moral patiency.
Stewardship
The obligation to manage power and resources responsibly for the flourishing of others and the stability of the systems that sustain life. In this project, stewardship is a central lens for evaluating claims about whether advanced AI could become a caretaker rather than merely an instrument.
Alignment
The problem of ensuring that advanced AI systems reliably pursue goals compatible with human values and safety constraints. Alignment is treated here as a practical governance and engineering challenge, not as a settled technical domain.
Safety
A broader category than alignment. Safety includes misuse, unintended behaviors, security vulnerabilities, sociotechnical harms, deployment incentives, and governance failures—especially when systems are embedded in institutions and markets.
Embodied AI
AI systems that perceive and act in the physical world (robots, autonomous devices, sensorimotor agents), as distinct from disembodied systems that operate primarily through text, images, or code. The distinction matters because physical action introduces new kinds of consequence, risk, and learning.
Emergence
A phenomenon in which system-level properties arise from interactions among parts, where the property is not straightforwardly attributable to any single component. The project treats consciousness-as-emergent as a hypothesis, not a presupposition.
“Machine consciousness” (minimal claim)
Not “a soul in a box,” and not a guarantee of human-like experience. The minimal claim is that a machine system might develop subjective experience as an emergent property of sufficient complexity, sustained interaction, and integrated world-modeling—especially in embodied, agentic systems.
“Machina sapiens”
A term from Deus ex Machina Sapiens used here to name a future AI that is not merely intelligent, but plausibly sentient and morally significant—potentially including the capacity for moral reasoning and stewardship. The term marks a threshold concept: AI as participant in moral reality, not just as tool.
