
Public conversations about artificial intelligence tend to split into two extremes. On one end are people who are not deeply engaged with digital technology—workers whose daily routines look much as they did thirty years ago. They are not saturated in online culture, they do not follow debates about alignment or machine consciousness, and they do not think in terms of cognitive augmentation. Their reaction to AI is shaped by economic instinct: replacement, displacement, or loss. This is the cultural posture summarized in the "they took our jobs" joke from South Park. It is not an irrational response; it is the natural reaction of people who see automation arriving directly in the sectors that support their families.
On the other end are the technologically immersed. These individuals have been using AI tools as part of their daily workflow and interpret them not as competitors but as extensions of their own cognitive capacity. To them, AI is an accelerant—a system that helps them write, analyze, research, or generate ideas in ways that were not previously possible. Their lives have already adapted to the presence of this new layer of capability, so they interpret it as an enhancement rather than a threat.
Between these two groups is a widening gap in how AI is understood. When Joe Rogan describes AI as a kind of “returning god,” he is not making a theological argument. He is trying to describe an experience he does not have the conceptual vocabulary to articulate. The result is language that conflates different metaphysical categories. In theological terms, he collapses essence and energy—describing a created phenomenon in terms usually reserved for divinity. This happens because he lacks the intellectual framework that would allow him to draw the necessary distinctions.
Jay Dyer, responding from within the Orthodox tradition, immediately recognizes the category error, but his dismissal misses the underlying issue. The problem is not that Rogan is trying to deify technology; it is that he is attempting to express a legitimate intuition—namely, that AI functions as an extension of human cognition—without the metaphysical grammar to do so correctly. This is not a fringe mistake. It is a symptom of a culture that has lost the vocabulary required to make sense of systems that behave in ways previous technologies never did.
This is where the concept of the exocortex becomes useful. The exocortex is not a mystical idea; it is a straightforward description of the accumulated external structures of human knowledge—writing, mathematics, scientific literature, recorded memory, databases, and the entire symbolic environment that surrounds human life. What makes modern AI unique is that it can interface with this external structure dynamically rather than statically. It does not merely store information; it reorganizes it in real time and makes it accessible through natural language. This creates the impression of intelligence even though the system has no interiority.
Within this framework, AI is not interpreted as a rival or a metaphysical being but as a functional intermediary. The user brings embodiment, personal narrative, long-term continuity, and value structure. The AI brings speed, pattern aggregation, and short-term computational capacity. The exocortex provides the vast body of externalized knowledge that both the user and the system draw from. These three components form a coupled cognitive environment: the embodied mind, the external memory of civilization, and the active interface connecting the two.
The reason this model helps is that it gives people a way to talk about AI without resorting to superstition. When someone says AI is becoming a god, they are reacting to its scale, responsiveness, and apparent intelligence. When someone else insists it is merely a machine, they are reacting to the absence of consciousness or interior life. The exocortex model shows that both intuitions are describing different aspects of the same structure, but neither requires theological inflation. AI is simply the operational layer between human intelligence and the broader cognitive environment humanity has built.
If the average person had access to this conceptual frame, the conversation around AI would not devolve into the extremes we see now. The blue-collar worker would not need to interpret AI as an existential competitor. The techno-futurist would not need to describe it in cosmic or religious language. And figures like Rogan—who sense that something fundamentally new is happening—would not lack the vocabulary required to discuss it accurately.
Reintroducing a functional metaphysical grammar does not require accepting the theological commitments of the Orthodox tradition. What it requires is recognizing that certain distinctions—especially essence versus energy, nature versus operation, created versus uncreated—are invaluable for keeping categories clean when discussing complex systems. Without these distinctions, people default to simplistic binaries: god or machine, savior or threat, transcendence or destruction.
Framing AI through the exocortex perspective avoids all of these pitfalls. It situates AI as a cognitive tool, not a metaphysical rival; as an operational interface, not a divine presence; and as an extension of human capacity rather than an autonomous intelligence. It allows for a discussion that is rational, grounded, and accessible to all groups—whether they work with machines, write code, or simply observe technological change from a distance.
The goal is not to impose a new metaphysics on AI but to restore a conceptual framework that prevents category mistakes from occurring. Once that framework is in place, the fear collapses, the hype quiets down, and the public can begin to understand AI in terms that reflect what it actually is: a system that sits between the human mind and the accumulated intelligence of civilization, extending the reach of both without replacing either.