We Should Get Better at Consciousness
December 15, 2021

I.
First, you will die. But even if longevity sciences defeat aging, or Ray Kurzweil figures out how to upload and digitize human consciousness, you still won't make it. The universe will eventually die, too. Everything slumps and crawls towards oblivion, in due time. A 'heat death', physicists say. As the universe expands, it cools. As it cools, new stars will cease to form. Solar systems and entire galaxies will disband as planets are flung from orbit and consumed by larger dying entities. The Earth has 5, maybe 6 billion years until our sun swells into a red giant, boiling the oceans and incinerating what flesh remains. Beyond our sun, our galaxy, the heat death continues. All protons will decay, and the matter that life is built of will disperse. The black holes will evaporate, no new matter will form, and the universe will cool, and cool, until it reaches thermodynamic equilibrium, at which point nothing can happen (since happenings require temperature differentials), and the universe will sit motionless for eternity.
This all being a not-so-unlikely outcome, here's my question: if every building will disappear, if every relationship will end, if every cause will fade, if everything we know, love, do, build, imagine, and desire, if it all gets wiped away in a chilled, motionless silence, what should we do? How should we live? What should we care about?
I don't mean this in that all too familiar, banal sense: is there any meaning in the universe? Do we have a purpose? I mean something more pragmatic. Is there anything we should be doing? Is there any way of living that's prescribed by our cosmic situation? Is there anything that matters enough, in the face of impending annihilation, that we should still, defiantly, orient our lives around it?
One fashionable response is a playful nihilism, embodied in the American novelist Kurt Vonnegut's line: "I tell you, we are here on Earth to fart around, and don't let anybody tell you different." It's tempting! Nothing really matters, so you might as well have a good time. Do what you please, and what pleases you.
The existentialists went a step further. Since meaning isn't intrinsic to the universe, but a construct of our own minds, we're all free (or condemned) to create our own meanings, our own stories about what matters. Again, nothing really matters, so we get to decide.
At times, I've inhabited both perspectives. They're fruitful to wrestle with. But as final perspectives, they strike me as wrong, and disastrously consequential. Later in this essay, I'll get into why they're wrong, and why consciousness matters enough to refute nihilism.
But wrongness aside, these narratives can be toxic, and self-fulfilling. If they ascend to define a culture, you get a rudderless society that erodes its coordination power. You get a society that'll doomscroll itself straight into oblivion, uninterested and mostly incapable of changing course. You get a society that's oblivious to its own potential, and too sclerotic to realize that potential anyway. Instead, it'll anesthetize itself until it's too late, and there really is nothing left to do but fart around until everything burns to a dead, silent crisp.
I'm frightened by how familiar this sounds.
But I'm hopeful, because I believe two conveniently related things: first, for a society to function and evolve in a generally healthy way, citizens need to share, loosely, in some collective idea of something that matters. This cannot be just anything. The thing that matters - whether pleasing God, racial purification, economic growth, enlightenment, freedom, or democracy - steers societies in particular directions, and all directions are not equal. But we're in luck, because my second belief is that the human condition is defined by just such a Good thing that we can all agree matters: the enrichment of consciousness.
Most of this essay will be devoted to elaborating what I mean by "enriching consciousness", but here's the claim I'm ultimately getting at:
So far as humans are concerned, consciousness is the most important thing in the universe, and everything we do should be oriented towards improving it.
By 'consciousness', I mean a particular aspect, what the philosopher of mind Thomas Nagel calls the subjective character of experience, or the what-is-it-like-ness:
"...no matter how the form [of consciousness] may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism...We may call this the subjective character of experience."
Being alive has a distinctive feeling to each of us. Our past experiences, perceptual faculties, genetic inheritances, social and physical environments, and qualities of attention all knit together to produce a holistic sensation of what it is like to be us. In general, similar patterns characterize each of our subjectivities, and yet, no two are identical. We know, for example, the general flavor that defines all plums, and yet, each plum has its own flair.
By 'enrichment' of consciousness, I mean helping it do what it's already been doing for billions of years: evolving what the bio-philosopher Andreas Weber calls experiential depth:
"The only factor of nature that expands is its immaterial dimension, which could be called a depth of experience: the diversity of natural forms and the variety of ways to experience aliveness."
One of the defining through-lines across the history of Earth's evolution has been the deepening of consciousness. The expansion of subjective states available for experience. That holistic sensation of what it is like to be conscious, to experience subjectivity, to exist, is on a remarkable evolutionary trajectory. From prokaryotes to humans, depth of experience may be the most striking axis of differentiation.
With recent developments in computational neuroscience and neural networks, we can even show this increasing experiential depth mathematically. Here's Thomas Metzinger, a leading philosopher of mind (& longtime meditator):
“The mathematical theory of neural networks has revealed the enormous number of possible neuronal configurations in our brains and the vastness of different types of subjective experience. Most of us are completely unaware of the potential and depth of our experiential space…your individuality, the uniqueness of your mental life, has much to do with which trajectory through phenomenal-state space you choose.”
The vastness of potential types of subjective experience available to us (not to mention other systems, like rainforests, or mycelium networks) raises the question of agency. How free and capable are we of steering ourselves through these landscapes of possibility? Or, how determined is our experiential matrix by circumstances beyond our control? If we don't choose our own configurations of subjectivity, who, or what, does?
Today, we're in the curious position of being a frontier product of this increasing experiential depth, as well as a primary obstacle to its continuation.
...
II. The Point of Eating Your Broccoli is Consciousness
If we're to get out of our own way and stop impeding the expanding experiential depth of consciousness, we'll need significant collective buy-in to the idea that consciousness is extremely, existentially important. I suggest consciousness goes beyond mere importance; it is, in fact, the most important thing.
Children are born with an intuitive understanding that normative claims about how one ought to live require fundamental reasons. Consider the common scene: an adult tells a child to do something, and the child responds with a relentless stream of probing questions, trying to understand why they should do the thing they're being told. The adult usually grows exasperated before hitting upon a satisfactory answer that cannot be undermined by yet another question:
Adult: "Do X."
Child: "Why?"
Adult: "Because of Y."
Child: "But why Y?"
Adult: "Because of Z."
Child: "Ok but why Z?"
Adult: "Arghh!"
This isn't nonsensical; the child is looking for the Socratic bottom. She's looking for a reason that isn't arbitrary, that isn't predicated on something else, but has its own spine. If we dig deep enough, I suspect we find that it's always and only enriching consciousness that holds up at the bottom.
Why is it important that kids eat their broccoli? To regularly exercise? To end poverty? To improve education? None of these, I contend, are inherently good. Why should we be healthy? One can live a perfectly long, unhealthy life, as the present state of average American lifespans and diets confirms (albeit yes, with higher odds of losing that life to heart disease, obesity, etc). Health is good because it broadens and nourishes the spectrum of possible experiential states. Health is good for consciousness.
You can run the same exercise for anything. Like poverty. Why is poverty bad? Set a few superficial answers aside, and we might stumble upon something like "justice". It's unjust that so many live in such poverty, while others right next-door live in such wealth, due to an opaque mix of luck, inheritance, and skill. Or, it's unjust that the wealth of a few is itself predicated upon the poverty of others, and deploys its own power to entrench the dynamics upholding the imbalanced arrangement.
I take these arguments seriously. But why do we recognize poverty as undesirable in the first place? I like Amartya Sen & Martha Nussbaum's capabilities framework, which sees poverty as a deprivation of basic capabilities required to live in ways one has reason to value. But even beneath this - why is deprivation of capabilities bad?
Because, I contend, it's fundamentally a deprivation of the potentiality for better states of consciousness. It's like stuffing a budding plant in a jar too small, leaving its limbs no room to grow. We recognize denying something its inherent capacity to flourish, to develop and unfurl and expand, as bad. Perhaps we understand poverty as a denial of just this. Poverty is bad for the development of consciousness, a constraint on its potentiality, and this is intrinsically bad.
From this perspective, the point of 'progress' is to enrich the environments that contextualize & guide the evolution of consciousness. But of course, as these things go, many argue that the past 400 years have seen just the opposite take place.

...
III. Leaving Consciousness Behind
Our culture is sliding away from consciousness, increasingly subject to "extrinsic drift", writes the neuroscientist and novelist Erik Hoel. Extrinsic drift is a gliding outwards, away from the interiority of consciousness, away from treating consciousness as a phenomenon that really, fundamentally matters:
"We take the extrinsic perspective on psychology, sociology, biology, technology, even the humanities themselves, forgetting that this perspective gives us, at most, only ever half of the picture. There has been a squeezing out of consciousness from our explanations and considerations of the world. This extrinsic drift obscures individual consciousnesses as important entities worthy of attention."
Drifts have origins; where did ours begin? In Galileo's Error, the philosopher of mind Philip Goff traces the drift back to Galileo. Specifically, his distinction between primary qualities and secondary qualities. With this distinction, Galileo carved reality at the joint between subjective and objective, between consciousness and consensus reality, setting aside the former, giving rise to the scientific method that could explain the latter.
For Galileo, there existed a primordial schism between objective properties like size, shape, location, and motion, and subjective properties like taste, smell, melancholy, and wonder. Subjective things were matters of the human soul, and thus properties interior to the human rather than parts of the external and objectively existing world. In his view, science was to deal with the objectively existing world. By setting aside soul and subjectivity, the remainder of the universe could be captured and explained via the quantitative language of mathematics.
Goff:
“Just as beauty exists only in the eye of the beholder, so colors, smells, tastes, and sounds exist only in the conscious soul of a human being as she experiences the world. In other words, Galileo transformed the sensory qualities from features of things in the world—such as lemons—into forms of consciousness in the souls of human beings...Galileo...created physical science by setting the sensory qualities outside of its domain of inquiry and placing them in the conscious mind. This was a great success, as it allowed what remained to be captured in the quantitative language of mathematics.”
This read of Galileo's work isn't unique to Goff. In Scientific Objectivity and Its Contexts, the Italian philosopher Evandro Agazzi describes Galileo's carving of reality thusly:
"Among the accidents of physical bodies, Galileo distinguished those that depend on the sensory abilities of the observer (colours, smells, and so on, later called 'secondary qualities'), and are therefore subjective - and thus not 'of physical bodies' - from those that are intrinsic to the body (the quantifiable and mathematisable qualities, later called 'primary qualities'), which he calls, for this reason, real accidents. It is only with these real accidents that natural science is concerned, and it can be so concerned efficaciously by adopting mathematics as a means for describing them, thanks to measurement."
In Galileo's 17th century, separating matters of the soul from science may not have seemed terribly consequential. The church, which at least professed to deal with matters of the soul, was still perhaps the most prominent social institution around. But as the scientific method propelled society into its modern form, and the church declined, we developed evermore sophisticated ways of manipulating and describing the objective world, while subjectivity was left where Galileo severed it. What we've come to understand as 'progress' has grown untethered from consciousness. We take the extrinsic perspective simply because we've let the alternative atrophy, like a muscle gone limp.
And if we continue in this groove, where might we end up? One haunting possibility, described by the philosopher Nick Bostrom: if you ignore consciousness long enough, it simply goes away. You get a "Disneyland with no children". A wonderfully advanced, efficient, automated, algorithmically productive society where the inefficiencies of subjectivity have been so thoroughly neglected, and have atrophied so deeply, that there's 'no one there', no one left to experience the industrious wonders to which consciousness was sacrificed:
"We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children."
Scott Alexander comments: "The last value we have to sacrifice [to this cold notion of 'progress'] is being anything at all, having the lights on inside. With sufficient technology we will be 'able' to give up even the final spark."
Without consciousness, there is nothing left. Only, as Hoel writes, "a sign in the desert that seems to be pointing nowhere until its flickering neon lettering is read: There is something it is like to be a human being. And what it is like matters. The sign points to what cannot be seen."
It matters more than anything else in the universe, so far as we're concerned.

...
IV. Consciousness Ethics
Enlightenment-style rationality, the industrial revolution, digital computing - these hallmarks of modernity have enabled immeasurable good. But they have also functioned as a collective of blacksmiths huddled around an anvil, the raw material of consciousness resting on top, each taking their turn crashing their hammers down onto it, smashing their tools onto consciousness, a rhythmic molding process ongoing for centuries now. What shape is emerging? What sorts of blacksmiths are these?
Bostrom and Alexander fear there is no shape emerging, but simply a flattening of consciousness out of existence, compressing the interior space, smaller and smaller, leaving less space for the intrinsic perspective to inhabit, like a moth stuck in a shrinking jar on the classroom shelf, having to progressively fold its wings until they can no longer extend, until they forget they could ever extend, until one day, there may be no intrinsic perspective left, a perfectly flat subjectivity with no interior, a window onto the world with no one looking out, an empty perspective, a moth who's forgotten it ever had wings.
Against such flattening, we could reframe progress as the enrichment of the environments - physical, social, mental - that produce consciousness. "Enrichment" can be understood as increasing both the experiential depth of potential subjective experiences, and increasing people's agency in being the steward of their own path through these expanding depths.
Experiential depth and agency become the twin pillars of a framework for what I'll describe as a consciousness ethics.
The phrase, 'consciousness ethics', comes from Metzinger's work. Motivated by significant increases in our capacity to change our states of consciousness - via obvious means, like psychedelics, nootropics, or gene editing, but also less obvious means, like the consciousness-altering effects of new technologies, global communication platforms, the attention economy, and so on - Metzinger urges us to begin wrangling a deceptively simple question: What is a good state of consciousness?
He offers a preliminary sketch, an intuition, of three conditions that a 'good' state of consciousness should satisfy:
- It should minimize suffering in all beings capable of suffering
- It should have "epistemic potential", or the capacity to expand knowledge
- It should foster behaviors that raise the probability for the sustained emergence of valuable states of consciousness in the future
Above, I reiterated my own axioms. A consciousness ethics should seek to:
- Increase the experiential depth of potential subjective experiences
- Increase an organism's agency in navigating that increasing depth
These two sets of axioms can melt into each other. The 'point' of increasing agency is to allow subjects greater capacity to minimize suffering, or, if they so choose, to undertake voluntary suffering in pursuit of some larger payoff, which they should be free to do. Additionally, suffering and agency are generally at odds with each other. To cause another subject suffering is to reduce their agency. We can thus see minimizing suffering as a subset of increasing agency.
Increasing "epistemic potential" is also related to experiential depth. In the next section, I'll describe experiential depth as the diversity of ways we might experience any given moment of consciousness. Expanding experiential depth is to expand the possibility space of how we might experience anything at all. I would then argue that the 'point of expanding knowledge' is to expand precisely this possibility space for consciousness, given the condition of minimizing suffering/increasing agency.
This leaves Metzinger's third axiom, which comes down to sustainability. A good state of consciousness should foster behaviors that don't foreclose on the sustained emergence of good states of consciousness in the future. Sustainability is a prerequisite for expanding depth or increasing agency. If consciousness dies out, so does our capacity to develop along these trajectories. This doesn't only apply to climate sustainability, but it's a good example. If we decimate the earth's biospheres, kneecap its biodiversity, and render much of its land uninhabitable, this would wreak havoc on the possibility landscape for the types of consciousness that arise.
Melting Metzinger's suffering axiom into agency, and his epistemic axiom into experiential depth, then adding his third axiom, provides a nice synthesis, a rough framework for a consciousness ethics:
- We should act so as to increase the experiential depth of potential conscious states
- We should act so as to increase the agency subjects have in navigating the expanding subjective state-space
- We should foster behaviors that raise the probability for the sustained emergence of increasingly deep and agentic states of consciousness in the future
With this crude sketch, we can dig deeper into what some of this stuff means.
...
V. What is Experiential Depth?

I. We should act so as to increase the depth of potential conscious states
Recall Andreas Weber's definition of experiential depth: "the diversity of natural forms and the variety of ways to experience aliveness." This is nice, but poetic. I'd like to bring greater analytical formality to Weber's claim. What is experiential depth, and what does it mean to increase it?
Parsing Weber's definition a bit, experiential depth is the capacity to experience consciousness in different ways. The more depth, the greater the variety of states of consciousness, of phenomenologies, available to an experiencing subject.
Depth thus refers to a set of possibilities, rather than any particular realized possibility. There isn't one type of experience indicative of depth - depth comes into view when we survey the range of possible experiences.
Consider a squirrel confronted by a distinct set of stimuli, like a firework show crackling directly above. What is the range of qualia - loosely meaning subjective experiences - available to the squirrel? Most likely, squirrels will always feel the same way: alert and fearful. But humans have a much larger range of potential qualia available when confronted with the same fireworks. Some may marvel at the beauty of the kaleidoscopic colors. Others may feel frustration at the sharp sounds, or joy over the communal affair of a family sitting together on a blanket, watching the show. Some may, like the squirrel, feel fear at the sharp sounds.
The difference between squirrels and humans doesn't require deep elaboration. But experiential depth varies between humans, too, albeit at smaller scales of difference.
Consider an anecdote from the writer and broadcaster Danny Baker, describing a moment of absorption:
"...I pick up sister Sharon’s teeny pink and white Sanyo transistor radio and switched it on. I looked up at the clear blue afternoon sky. Ike and Tina Turner’s “River Deep, Mountain High” was playing and a sort of rapturous trance descended on me. From the limitless blue sky I looked down into the churning, crystal-peaked wake our boat was creating as we motored along, and at that moment, “River Deep” gave way to my absolute favourite song of the period: “Bus Stop” by the Hollies. As the mock flamenco guitar flourish that marks its beginning rose above the deep burble of the Constellation’s engine, I stared into the tumbling waters and said aloud, but to myself, “This is happening now. THIS is happening now.”"
We may not all love mock flamenco guitar, but we might all understand the gist of this experience. The cultural theorist Mark Fisher describes the phenomenology of this absorption as a sense of "exorbitant sufficiency".
Accessing states of consciousness like this exorbitant sufficiency requires more than just the right music. They require the depth and flexibility of consciousness that enables such transitions in the first place. It's easy to imagine that a squirrel lacks the requisite depth to feel what Baker felt while listening to his sister's transistor radio and staring into the churning water tailing his boat's engine. But similarly, I've explored elsewhere how variations in economic conditions can lead to variations in the kind of cognitive flexibility that affords such moments of deep absorption in the first place.
Overworked, sleep-deprived, stressed out workers face, on aggregate, greater cognitive rigidity than those who live with greater assurance of their economic wellbeing. This isn't to say that no one living in poverty can experience rapture. I've lived in India, I know it's possible. But it is to say that on the whole, poverty, or more precisely, the felt-sense of acute economic precarity, engenders cognitive rigidities that reduce our capacity to exercise agency over our states of consciousness, including the flexibility to enter into alternate modes, like exorbitant sufficiency.
Experiential depth, then, differs across all sentient beings. It is the range of potential qualia, the quantum wave of phenomenological probabilities. It's modulated by just about everything - nature and nurture, genetics and biology, lived experience and circumstances, economic conditions, media technologies, diet, and so on.
But this is all still somewhat vague. To make things more concrete, we can explore four proxies that provide imperfect, but helpful formalizations of what experiential depth is.
- Encephalization Quotient
- The connectome map of neural connections
- Counterfactual flexibility
- Perturbation Complexity Index (PCI)
A note about proxies. None of these strike me as correct ways of proxying experiential depth. But GDP isn't a correct way of understanding the health of the economy, either, nor are CO2 emissions a perfect way of understanding climate change. If each is taken with its limitations in mind, it affords a useful way of seeing part of what's going on. Maps provide information about the territory, so long as we remember that maps are necessarily imperfect.
...
Encephalization Quotient (EQ)
EQ is a frustratingly chewy way of saying brain size relative to expected brain size, given body mass. Brain size tends to increase as body mass increases. But we've mostly thrown away the idea that absolute brain size is a useful indication of intelligence.
Instead, we developed average measures of expected brain size for different body sizes. The larger a particular brain's positive deviation from its expected size, roughly, the more intelligent we assume that brain to be.
Moving up the phylogenetic tree from slime molds to humans, cognitive capacity seems clearly linked to EQ. As a proxy for experiential depth, we would make the assumption that EQ is similarly linked to depth. The greater the EQ, the greater the repertoire of potential qualia.
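To make the arithmetic concrete, here's a minimal sketch of how EQ is computed, assuming Jerison's classic allometric baseline for mammals (expected brain mass of roughly 0.12 times body mass to the 2/3 power). The species figures below are rough textbook approximations, used only for illustration:

```python
def expected_brain_mass(body_mass_g: float) -> float:
    # Jerison's allometric baseline for mammals:
    # expected brain mass grows with the 2/3 power of body mass
    return 0.12 * body_mass_g ** (2 / 3)

def encephalization_quotient(brain_mass_g: float, body_mass_g: float) -> float:
    # EQ > 1: a larger brain than expected for this body size
    # EQ < 1: a smaller brain than expected
    return brain_mass_g / expected_brain_mass(body_mass_g)

# Rough, illustrative figures only
print(round(encephalization_quotient(1350, 65_000), 1))  # human: well above 1
print(round(encephalization_quotient(7.6, 500), 1))      # squirrel: near 1
```

Under this proxy, a human EQ of roughly 7 versus a squirrel's roughly 1 would translate into a far larger repertoire of potential qualia.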
...
Connectome maps

While EQ is concerned with brain mass, connectomes are maps of the brain concerned with the density of neural connections. Perhaps it isn't so much the size of a brain, as the density and complexity of its wiring that can tell us useful information about experiential depth.
However, researchers are uncertain whether more complex neural connections are associated with higher intelligence. Some research finds intelligence is positively associated with neural complexity, while other research finds the opposite.
...
Counterfactual flexibility
This one needs a little more explanation, but also gets closer to a more interesting proxy. Counterfactuals are the brain's imaginative capacity. They are imaginations of things other than actually-occurring reality. Counterfactuals can occur for any time horizon. The map of counterfactuals I might imagine for the next 5 seconds is relatively constrained. I might imagine myself standing instead of sitting, or raising my right arm, or calling out for my cat. If we expand the time horizon to 5 days, the range of counterfactuals I might imagine expands. I might be in Croatia. Or have read 2 books. Or have built a table.
You can map out this 'counterfactual tree', and see that the tree of counterfactuals expands with the time horizon.

Counterfactuals are at the heart of the predictive processing view of the brain/mind. Our brains generate internal models of the world in order to plan, predict, and aid survival. The greater the counterfactual depth, the greater the number of imagined worlds we use in this process.
It's possible to consider counterfactual depth (the time horizon of counterfactualizing) as, itself, a proxy for experiential depth. In a way, it's obvious. The more imagined worlds I hold in mind, the greater the range of internal 'happenings'.
That being said, there's something off about making this conflation between depth and counterfactuals.
Counterfactual depth basically means I have a flurry of abstracted concepts concocted by my brain all superimposed upon each other. Thoughts layered upon thoughts; words layered upon words; abstraction layered upon abstraction. Pitch this to a Buddhist, and they'd say you're moving in the wrong direction.
Layered abstractions increasingly separate one from the depths of insight Buddhist meditation traditionally holds dear. Depths can be taken almost literally here, as insight is said to lie beneath the mind's habitual abstractions. Boundless experiential depth, tradition tells, is a primordial condition of the mind that gets obscured by homo sapiens' acquired habits of predictive processing. Suffering is an emergent property of abstraction. The cessation of abstraction is the cessation of suffering.
Why, then, would we use a measure of abstractions as a proxy for a positive ideal, like experiential depth? Aren't these opposing forces?
In fact, we can discern a very important difference between counterfactual depth and experiential depth. The entire array of counterfactually modeled worlds may still all be built upon the same patterns of phenomenology. If I have acute anxiety, I might counterfactualize 1,000 different worlds, and built into each of them is the same pattern or structure of feeling, the same 'way' of experiencing aliveness.
There is little experiential depth to a counterfactual tree with 1,000 branches cut of the same phenomenological cloth. This is only one degree of depth, containing the same mental habits and patterns that suffuse the same qualities into any and all imagined states of consciousness.
But counterfactual depth is not all forsaken as a useful proxy. We just need to be more specific. What we really want is counterfactual flexibility. We want a blooming tree loaded with counterfactuals, containing a diversity of potential ways of experiencing aliveness. We want the capacity to internally model not only different worlds, but different ways of feeling in these worlds, such that we may actually have a choice between them. When I face fireworks, I want the capacity to choose between marvel or frustration.
Experiential depth, through the lens of counterfactual flexibility, is the capacity of our predictive minds to include different qualia, different phenomenologies as part of the set of possible worlds we can imagine.
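The distinction can be made concrete with a toy model. In this sketch (the data structures and labels are my own invention, purely illustrative), counterfactual depth counts imagined worlds, while experiential depth counts the distinct phenomenologies woven into them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Counterfactual:
    world: str          # an imagined state of the world
    phenomenology: str  # the 'way of feeling' built into that imagining

def counterfactual_count(tree) -> int:
    # counterfactual depth: the raw number of imagined worlds
    return len(tree)

def experiential_depth(tree) -> int:
    # experiential depth: the number of *distinct* ways of feeling
    # across the tree, not the number of branches
    return len({cf.phenomenology for cf in tree})

# An anxious tree: 1,000 branches cut from the same phenomenological cloth
anxious = [Counterfactual(f"world-{i}", "dread") for i in range(1000)]

# A flexible tree: fewer branches, more ways of feeling the same fireworks
flexible = [
    Counterfactual("fireworks", "marvel"),
    Counterfactual("fireworks", "frustration"),
    Counterfactual("fireworks", "communal joy"),
    Counterfactual("fireworks", "fear"),
]

print(counterfactual_count(anxious), experiential_depth(anxious))    # 1000 1
print(counterfactual_count(flexible), experiential_depth(flexible))  # 4 4
```

The anxious tree scores high on counterfactual depth and flat on experiential depth; the flexible tree is the reverse, which is exactly the capacity for choice described above.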
...
Perturbation Complexity Index (PCI)
PCI is among the first empirical measures that endeavors to gauge the 'level' of consciousness in a subject. Basically, you zap a small pulse through the brain, keep track of its travels via EEG, and afterwards run a compression algorithm that tells you how simple or complex its route was. Sort of like mapping the complexity of a pinball's course as it's bounced around the machine.
The more complex the pulse's travels, the higher the level of consciousness in the brain. As explained by Enzo Tagliazucchi, director of the Consciousness, Culture, and Complexity Lab, the complexity of the pulse's travels can tell us about the richness - or dullness - of the possible states available to that cognitive system:
"...a perturbation of a system which has a small repertoire of possible states will result in a rather dull exploration of its (few) possible states, whereas a system rich with possible states will result in a more interesting and informative exploration. The first case will be associated with low complexity (low PCI), the second with high complexity (high PCI)..."
It's difficult to know what a "system rich with possible states" really means. It may fall just as short of being a good proxy for experiential depth as counterfactual depth does, if a system is rich with possible states that all support the same phenomenology, the same flavor of qualia.
One of PCI's main virtues is being an "objective measure of consciousness that is independent of the subject's ability to interact with the external environment." And yet, this virtue makes it unclear whether PCI is useful in understanding a subject's experiential depth. Still, until we develop better measures, PCI offers a useful step in beginning the project.
...
But experiential depth isn't worth much if we don't have the agency to steer ourselves through it. What good is being able to counterfactualize about a wide array of potential ways of feeling aliveness, if we can't actually act upon that array, and choose the states of consciousness we'd like to have? It's a peculiar form of torture to show someone a delicious breakfast buffet, while they're locked into eating the same meal, being steered by forces beyond their control, merely able to view the possibilities that they're incapable of realizing.
This is why the second axiom, agency, is so important. Without it, experiential depth is torture. But together, expanding depth while increasing agency provides a powerful recipe for a meaningful kind of 'progress', a real sort of freedom.
In this spirit, let's now turn to agency.
...
VI. What is Agency?
II. We should act so as to increase the agency subjects have in navigating the expanding subjective state-space
At its core, agency may simply mean the freedom to choose.
But this is the sort of vague rhetoric that everybody can agree with, since it means absolutely nothing in practice. Conservative followers of Milton Friedman and democratic socialists in the lineage of Eugene Debs alike could agree that the freedom to choose - agency - should be a foundational ideal. But they would likely agree on nothing else, least of all how it should be pursued in practice.
So let's zoom in, and study agency at three different scales: cognitive, biological, and social. Each provides related, but distinct, ways of understanding what agency is, and how it might be achieved. As with experiential depth, a flurry of different vantage points may help create a more robust understanding of an otherwise nebulous thing.
...
Cognitive Agency
Cognitive agency is perhaps the most familiar scale, confining the question of agency to our own minds, to agency as we each experience it. But a growing chorus of cognitive scientists is now telling us, at least on one level, that cognitive agency is an illusion. The sense of agency, the feeling that we are the doer of our actions, the thinker of our thoughts, that it is 'I' who conducts my organism - this sensation is a misleading byproduct of the predictive mind.
And yet, there may still be a capacity, a set of skills, that can rightfully be called agency at the mental level. Thomas Metzinger calls this mental autonomy.
...
Sense of Agency
What causes things to happen? Why might I decide to reach for a glass of water, or stand on a nearby bench and shriek the words of Allen Ginsberg's poem Howl? According to the Predictive Processing framework, the brain is constantly generating predictive models about the world. These models enable planning and prediction. They're quite handy for aiding survival. But when the brain imagines future events, it confronts this very question: what causes things to happen?
In order to imagine future events, the brain must model causality. A handy way of doing so is to imagine that 'I' am a coherent thing, a self, who decides to shriek a poem, or raise a water glass, even if no such substantial entity as a self exists.
Returning to Laukkonen & Slagter's paper, they write:
"...possessing a self-model is a natural consequence of active inference and prospection: One cannot predict the sensory outcomes of future actions without representing oneself in those actions as a hidden cause of changes in sensory input. For example, picking up a glass of water requires that we have a model of our body as an intentional agent who can pick up a glass of water. And indeed, we must have a model for the fact that we are an agent that needs such a thing as water."
But this sense of agency, Thomas Metzinger writes, is a "brilliant and parsimonious causal story, even if it's false...Empirically speaking, the self-as-agent is just a useful fiction or hypothesis, a neurocomputational artefact of our evolved self-models."
Cognitive agency, then, should not be conflated with the feeling that we are the cause of our actions. This would be like tethering a normative principle to a ghost. As James Moore (2016) writes, "...the brain appears to actively construct the sense of agency, and because of this, our experiences of agency can be quite divorced from the facts of agency."
What, then, are the real, non-illusory facts of agency, in cognitive terms?
...
Mental Autonomy

Returning to Metzinger: although the sense of agency is a "surface phenomenon, produced by the fact that the underwater, unconscious causal precursors are simply unknown to us", there still may be "other ways for the organism as a whole to shape what happens in its mental life."
Metzinger lists a series of agentive capacities that may yet improve our 'freedom to choose' our varieties of consciousness. These include:
- the ability to impose rules on one's own mental behaviour
- actively controlling the focus of attention
- explicitly selecting mental goals
- guiding the flow of thought in accordance with reason
- capacity to intentionally end an ongoing mental process (veto control)
Together, these capacities make up what Metzinger calls mental autonomy. Mental autonomy is a second-order capacity that acts upon the first-order contents of consciousness. As Krzysztof Dołęga puts it: "Mental agency...is a second-order representational faculty which takes first-order mental processes as its objects."
A good way to illustrate mental autonomy is to point out an example of when we lose it: daydreaming.
A daydream simply happens to you, an event in which you 'lose yourself'. You've lost veto control over your ongoing mental processes, and no longer have agency over guiding the flow of thoughts or the focus of attention. This loss of mental autonomy is not the end of second-order modulation over first-order mental content. Rather, it's a temporary loss of the knowledge that you possess the active ability to exercise mental autonomy. Second-order modulation goes on auto-pilot, but you aren't aware that it's done so. Until, that is, you 'come to'.
Mental autonomy is both a (fragile) type of knowledge that gets bundled (represented) into our self-models, and a trainable skill that one can exercise over one's own consciousness.
Mental autonomy is a value that Metzinger holds dear. He writes, as "a working concept, mental autonomy is an excellent new candidate for a basic value that could guide us in education, policymaking and ethics." For his part, he urges that meditation be taught in schools, integrated as a core part of public curriculums. What could be more valuable than training the capacity to exercise agency over one's own consciousness, especially amidst an increasingly frenetic world that exerts its own control over consciousness, rarely in line with values worthy of our potential?
But Metzinger is clear that mental autonomy is not solely a matter of individual responsibility and practice. It's a socially constructed and sculpted phenomenon. Designing social institutions that enhance mental autonomy is a "neglected duty of care on the part of governments":
"What is clear by now is that our societies lack systematic and institutionalised ways of enhancing citizens’ mental autonomy. This is a neglected duty of care on the part of governments. There can be no politically mature citizens without a sufficient degree of mental autonomy, but society as a whole does not act to protect or increase it. Yet, it might be the most precious resource of all. In the end, and in the face of serious existential risks posed by environmental degradation and advanced capitalism, we must understand that citizens’ collective level of mental autonomy will be the decisive factor."
Elsewhere, I've written about the role economics might play in revamping a social commitment to mental autonomy. In practice, this may mean anything from policies like basic income and universal healthcare, to the democratization of companies and national assets. Connecting cognitive capacities like mental autonomy with specific policy proposals will require widespread debate, but there isn't a moment to lose.
But if we're to move beyond individualized notions of agency, it may do us well to explore conceptions of agency that also move beyond the individual. How might we think about agency not in terms of an individual human mind, but all living systems, at any scale?
...
Biological Agency: Cognitive Horizons
Mental autonomy is a useful concept for judging the agency of a single organism, like an ant. Ants, relative to humans, don't have much mental autonomy. They likely don't have veto control over mental processes, or the ability to impose rules on their own mental behavior, or to actively control the focus of attention.
Clearly, ant colonies have significantly greater agency than a single ant. But as an evaluative concept, mental autonomy is rather blind to this sort of rise in agency. Neither, I suspect, can an ant colony exercise veto control, or actively guide the focus of attention, or impose rules on mental behavior. But ant colonies exhibit all sorts of astoundingly complex, coordinated behaviors that suggest a meaningful rise in agency of the system, seen as a whole. Ants participate in specific divisions of labor, can build rafts to float atop flooding, communicate about where to find food, disseminate warnings of attacks on the colony, and even farm other species to collect honeydew in the same way we farm cows for milk.
To make sense of this variety of agency, we can use a concept developed by the biologist Michael Levin and philosopher Daniel Dennett: cognitive horizons.
We can categorize and compare a living system's agency by mapping its cognitive horizon, the spatiotemporal dimensions of the goals it can represent and work towards:
"Call this the system’s cognitive horizon. One way to categorise and compare cognitive systems, whether artificial or evolved, simple or complex, is by mapping the size and shape of the goals it can support (represent and work toward). Each agent’s mind comprises a kind of shape in a virtual space of possible past and future events. The spatial extent of this shape is determined by how far away the agent can sense and exert actions – does it know, and act to control, events within 1 cm distance, or metres, or miles away? The temporal dimension is set by how far back it can remember, and how far forward it can anticipate – can it work towards things that will happen minutes from now, days from now, or decades from now?"
Behaviors like farming other species for food indicate a lengthened temporal dimension of the colony relative to an individual ant, while disseminating pheromones that warn against an attack, or locating faraway food treasures, indicate a colony coordinates behavior across much greater spatial distances than any single ant. It has a greater cognitive horizon.
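This comparison can be caricatured as a small data structure (my own illustration of Levin & Dennett's definition, not their formalism, and the numbers below are made up purely for illustration): a cognitive horizon parameterized by how far an agent can sense and act in space, and how far back and forward it reaches in time.

```python
from dataclasses import dataclass

@dataclass
class CognitiveHorizon:
    sense_and_act_radius_m: float  # spatial extent of sensing and action
    memory_span_s: float           # how far back it can remember
    anticipation_span_s: float     # how far forward it can plan

    def contains(self, other: "CognitiveHorizon") -> bool:
        """One horizon subsumes another if it matches or exceeds it
        on every spatiotemporal axis."""
        return (self.sense_and_act_radius_m >= other.sense_and_act_radius_m
                and self.memory_span_s >= other.memory_span_s
                and self.anticipation_span_s >= other.anticipation_span_s)

# Illustrative, invented numbers: a single ant vs. its colony.
ant = CognitiveHorizon(sense_and_act_radius_m=1.0,
                       memory_span_s=60.0,
                       anticipation_span_s=60.0)
colony = CognitiveHorizon(sense_and_act_radius_m=200.0,
                          memory_span_s=86_400.0,      # day-scale memory
                          anticipation_span_s=604_800.0)  # week-scale projects

assert colony.contains(ant) and not ant.contains(colony)
```

The point of the sketch is only that the colony's horizon strictly contains the ant's: farming and long-range foraging lengthen the temporal axis, pheromone signaling widens the spatial one.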
A consciousness ethic compels us to increase agency. If we understand agency in terms of cognitive horizons, how might we do so? Metzinger saw mental autonomy both as a skill that can be trained by individual practice, and a capacity fostered by social institutions. To uncover a similar framework for increasing cognitive horizons, we can look back to an already established formula, the one used by evolution for thousands of years. Levin:
"The key dynamic that evolution discovered is a special kind of communication allowing privileged access of agents to the same information pool, which in turn made it possible to scale selves. This kickstarted the continuum of increasing agency."
The evolutionary algorithm for increasing cognitive horizons is connecting agents via shared information pools, which scales up selves. In this sense, a 'self' may be thought of as a sort of superorganism, wherein the borders separating distinct agendas of a multitude of smaller selves within a shared environment are erased. What remains is a collection of self-interested micro-systems, whose self-interests have all merged into one. A self is a local unity of perfect incentive alignment.
At the cellular level, evolution brings this alignment about by establishing bridges, connexin proteins, that allow "two neighboring cells to directly connect their internal milieus via a kind of tunnel through which small molecules can go." This bridge allows for the bidirectional flow of ions and signaling molecules. Molecules traverse the bridge so fast that for all intents and purposes, whatever happens to one cell now happens immediately to the other. Any incentive that may have existed for one cell to act against the interest of another is erased by their connexin bridge.
The incentive landscape is transformed, not by eliminating self-interest, but by merging and scaling it upwards. Perfect cooperation becomes the rationally self-interested option, and agency now has a larger, more capable system to work with. "The combined network of many cells", Levin & Dennett write, "has hugely more computational capacity than the sum of individual cells' abilities."
They continue:
"Crucially, this merging implements a kind of immediate ‘karma’: whatever happens to one side of the compound agent, good or bad, rapidly affects the other side. Under these conditions, one side can’t fool the other or ignore its messages, and it’s absolutely maladaptive for one side to do anything bad to the other because they now share the slings and fortunes of life. Perfect cooperation is ensured by the impossibility of cheating and erasure of boundaries between the agents. The key here is that cooperation doesn’t require any decrease of selfishness. The agents are just as 100 per cent selfish as before; agents always look out for Number One, but the boundaries of Number One, the self that they defend at all costs, have radically expanded – perhaps to an entire tissue or organ scale."
But now, how might we scale this framework up, using it to ask questions of human agency, rather than cellular? It's clear enough how the cognitive horizon of a single cell differs from the collective that is a human body. But how might we consider cognitive horizon differentials between humans?
Recall the definition of a cognitive horizon: the spatiotemporal dimensions of a living system's goals. The spatial dimension is determined by the distance at which the agent can sense and exert actions; 1 cm away? 1 mile? 1 galaxy? The temporal dimension is given by how far back into the past the agent can remember, and how far into the future its goals reach.
Cognitive horizons are, essentially, counterfactual trees. The more temporally thick (i.e., the farther out into the future) the tree of counterfactual imagination, the deeper into the future the goals can reach.
Using this association, we can decompose "increasing" agency into two categories: expanding, and improving.
- We can expand cognitive horizons by making near-term goals easier to satisfy, allowing attention to move up the counterfactual tree, into deeper time horizons.
- We can improve cognitive horizons by reducing biases in predictive cognition, thereby increasing the flexibility, or range, of counterfactual processing.
A quick elaboration of each.
...
I. We can expand cognitive horizons by making near-term goals easier to satisfy, allowing attention to move up the counterfactual tree, into deeper time horizons.
It's difficult to save for retirement if you can hardly afford rent, or groceries for the week's dinner. It's difficult to align your behaviors with the well-being of future generations, if the well-being of your own is under constant threat.
We could say that goal-oriented, agentive thinking follows a principle similar to Maslow's hierarchy of needs. A good way to facilitate climbing the pyramid is to satisfy a basic threshold of each underlying level. Similarly, an effective way to induce longer-term agentive thinking is to make shorter-term goals easier to satisfy. The less energy expended on lower levels, the more energy available for higher, longer-term plans of agency.
But "short-term goals" are a broad category. What kinds of goals should we focus on making easier to satisfy, if our aim is to increase the temporal depth of goal representation?
Put differently, if we're trying to free up the predictive mind, we should focus on goals given the highest precision weighting - the highest importance - within that system. This is a project I'd love an expert to take up, or explain to me: a ranked hierarchy of the relative importance of different kinds of experience, or qualia, as determined by precision weighting within predictive cognition.
Fortunately, I'm aware of at least one starting point. A concept Metzinger calls functional rigidity.
In Being No One, Metzinger defines functional rigidity as a trait of certain classes of conscious experience, mostly related to basic survival. If you are starving, fearful, or otherwise insecure in your basic livelihood, these 'contents of consciousness' will be more difficult to exert mental autonomy over than less-survival centric contents. They're more rigid features of consciousness. I can decide to think about an elephant rather than a cat, but I cannot so easily decide to cease thinking about my inability to feed my family, or secure the roof over our heads.
So, one way to increase the temporal dimension of a citizenry's cognitive horizon is to reduce the difficulties associated with meeting these basic - dare I say, economic - needs.
...
II. Improving the adaptive value of counterfactual thinking by loosening predictive biases and increasing flexibility
If cognitive horizons are powered by counterfactual processing, they are subject to the same hazards. As is well established, cognitive biases constrain counterfactual thinking, trapping cognition in habitual loops that may be less than helpful.
Cognitive biases are thus a constraint, not on the temporal dimension of cognitive horizons, but on their flexibility, richness, and range. If short-term goal satisfaction helps extend the length of the counterfactual tree, reducing cognitive biases helps extend its width. For any given temporal cross-section, we can imagine more possibilities, as cognition is less constrained by acquired habits.
Laukkonen & Slagter study meditation as a direct intervention, 'pruning the counterfactual tree' by down-regulating the importance ascribed to arising predictions. This affords a less involved relationship with our habitual predictions that arise first, allowing us to patiently attend to predictions other than these first-responders.
"...we propose that meditation may increase the counterfactual richness of processing outside of formal meditation by weakening ingrained prediction loops during meditation. The broadscale loosening of beliefs may permit more flexible and multidimensional processing...In general, meditators should experience less habitual grasping onto passing experience i.e., they should display a decrease of salience and stickiness of arising predictions."
Psychedelics are believed to function in similar - though not identical - fashion. Psychedelics reduce the precision-weighting (read: importance) ascribed to high-level priors, or unconscious assumptions about how the world works, and how we should habitually behave within it. By loosening these high-level constraints on predictive processing, we experience a widened spectrum of possible states.
Meditation and psychedelics offer two potential strategies, but they're by no means the only ones. The broader question: what spectrum of strategies is available to us in the project of reducing cognitive biases, so as to expand our cognitive horizons?
...
Social Agency
Mental autonomy provides a picture of agency at the level of an individual mind; cognitive horizons provide a picture of agency at the level of all living systems; what remains is to examine agency at the level of social systems.
In sociology, agency is often contrasted with structure. Agency is an individual's capacity to make free choices, independent of structural forces (class, religion, gender, ethnicity, customs, abilities, etc.). There's broad agreement that agency is this freedom to choose, but very little agreement as to what the necessary conditions are for a choice to be rightfully considered free.
Market economies are premised on this notion of voluntary exchange. Sellers are free to set their own prices, buyers are free to accept or reject. Any successful transaction implies that both parties felt it was in their interest to accept the given terms. No one forces you to buy bread, or pay rent - these are decisions you freely make in the marketplace.
It's no exaggeration to say the entirety of orthodox economics is built on this imputed notion of free choice. Here are the opening two sentences from Wikipedia's entry on voluntary exchange:
"Voluntary exchange is a fundamental assumption made by neoclassical economics which forms the basis of contemporary mainstream economics. That is, when neoclassical economists theorize about the world, they assume voluntary exchange is taking place."
At least since Karl Marx (if not Rousseau before him), heterodox economists have critiqued the idea that market transactions occur within adequate conditions to ensure that all such decisions are meaningfully voluntary.
At one extreme, such a 'voluntary transaction' is about as voluntary as forcing a wrongfully convicted inmate to choose between lethal injection or firing squad. Given a closed set of options all of which share an undesirable premise, the convict may choose. But the resultant choice fails to convey the convict's preferences for options that lie outside the closed terms of the decision. As Joseph Heller puts it in Catch-22:
"[Milo Minderbinder] raised the price of food in his mess halls so high that all officers and enlisted men had to turn over all their pay to him in order to eat. Their alternative—there was an alternative, of course, since Milo detested coercion, and was a vocal champion of freedom of choice—was to starve."
If the price system in a market economy is said to convey information about people's preferences, it remains blind to preferences that lie outside the boundaries of given economic conditions. Such a price system is incapable of conveying preferences that reject the system's parameters, since in the end, everyone must accept some market offering through which they pay rent and buy groceries. Or, they could always just starve.
Political philosopher Karl Widerquist writes that truly voluntary decision-making requires an exit option, "a reasonable alternative to participation in the projects of others." Before private property enclosed all the Earth's land, it may have been reasonable to reject all existing market offers, and instead settle one's own land, grow one's own food, and provision for one's own life, without entering into labor contracts. Today, there is no land left to settle (space notwithstanding). Reasonable alternatives to accepting one of the labor contracts on the market are hard to come by.
Widerquist concludes that a basic income guarantee is the optimal way of provisioning a real exit option to all members of society, and thus shoring up the foundations of the market economy: unconditionally providing every citizen with access to enough resources to meet their basic needs, no matter what decisions are made, or left unmade, in the marketplace.
Where Metzinger's mental autonomy made a vague reference to social institutions, social agency, in present civilizational conditions, explicitly requires economic institutions to foster and uphold agency. There's much debate to be had over the best methods for provisioning these 'means of agency'. One may prefer a system of basic services over basic income, more localized schemes of community provision, a job guarantee (though I don't believe a job guarantee offers a true exit option, since one must still assent to the job on offer, or starve), or a number of other possibilities.
But I don't think Widerquist's exit option goes far enough. Social agency is not a binary feature to be checked off with a single policy. It's a project that lives as long as a civilization does, a cascading phenomenon with no endpoint.
The great political economist Henry George writes that human progress itself "goes on as the advances made by one generation are...secured as the common property of the next, and made the starting point for new advances."
As I've suggested throughout this essay, I take the heart of human progress to be the improvement of consciousness. From this perspective, progress calls for making advances and "securing them as the common property of the next," which may be achieved by designing economic institutions that unconditionally provision resources to all citizens of that society, generating an open-ended increase of meaningful exit options.
...
Agency, Reconvened
We've explored agency at three scales, each offering unique perspectives on the umbrella project of increasing agency. Mental autonomy guides us to train agency like any other skill, teaching the young techniques for exercising second-order regulation over first-order mental content. Cognitive horizons guide us to make short-term goals easier to satisfy, and use methods of counterfactual pruning - meditation, psychedelics, therapy, and beyond - to develop less biased, more flexible, rich, creative minds. Social agency guides us to design socioeconomic institutions that provision unconditional access to resources, supporting increasingly voluntary decision making, or, meaningful notions of "free choice".
And yet, if we were to exterminate the human race via a nuclear bomb next month, or continue our patterns of stripping the biosphere of the regenerative capacity it requires to keep us alive, all this talk of experiential depth and agency would be cut short.
In that spirit, we move to the third and final principle of this consciousness ethics sketch: sustainability.
...
VII. Sustainability

III. We should foster behaviors that raise the probability for the sustained emergence of increasingly deep and agentic states of consciousness in the future
If consciousness is the most important thing in the world so far as humans are concerned, we ought to make sure we maintain the conditions for its continuation. Now, as I pointed out at the outset, it's unlikely that consciousness will persist forever. Some freak cosmic event, whether a heat death, asteroid, or whatever else, will likely wipe us all out, one way or another.
But this doesn't absolve us of acting to sustain consciousness as long as possible. We should strive to preserve all instantiations of consciousness, and wherever possible, foster its improvement, understood as opening towards higher degrees of agency and depth.
This may direct us to consider everything from our increasingly devastating relationship to the ecologies we're part of, to the - admittedly far-out - risks of artificial intelligence optimizing for a function that wipes human consciousness from existence. Traditional political ideals can also find grounding in this principle, from ending all forms of war (especially nuclear), to solving global coordination problems.
...
VIII. The DAS Model of Consciousness Ethics
Taken together, these three principles form the basis of a crudely sketched, formal-ish consciousness ethics. We can call it the Depth, Agency, Sustainability model for consciousness ethics. Or, DAS model (my tongue is firmly planted in my cheek).
My hope in formalizing these principles is to make them easier to critique, which may lead to better models, which may lead to better consciousness ethics.
Other models already exist, like the Symmetry Theory of Valence (STV). Basically, STV judges how good a state of consciousness is based on the harmonics of brain activity. The more symmetric over time - or, consonant - the brain activity, the higher the positive valence, or subjectively-felt-goodness, belonging to that state of consciousness.
How might these models be refined? What other models might we create? As the complexity and integration of information continues skyrocketing amidst a rapidly globalizing, digitizing society, and the evolution of consciousness is rapidly pulled in tow, how will we make sense of what kinds of consciousness we're creating? How will we steer consciousness towards its richer, kinder potentialities?
...
IX. What Now?
What might a society unified around the ethical framework of improving consciousness be like? What decisions might we make, now, were we to adopt such a stance?
A nifty feature of a consciousness ethics is that it offers a single, collective story about what matters that nevertheless generates diversity. Answers to the above question, the actual political and pragmatic implications of a consciousness ethics, will be as diverse as the people who participate in the story.
For example, I'll offer a list of some starting points that come to my mind. Doing so, I'm well aware that you may have an entirely different list, or may even think my own suggestions are absolutely wrong. This is the sort of political discourse I want: what is and isn't good for consciousness, and why?
Anyway, some provocations:
- End government subsidies for unhealthy, highly processed foods that raise cardiometabolic risks. Subsidize foods that don't slowly gnaw away our vitality.
- Admit the war on drugs was a crime against humanity, and get to work making up for lost time in exploring how drugs - from hallucinogens to sleep drugs - might be responsibly used to improve the human experience. Pursue legalization frameworks for psychedelics, as Oregon is now pioneering (but do not limit psychedelic access to psychiatric patients).
- Reduce the degree of economic anxiety that infuses life today. Democratize capital ownership. Invest in public goods, better safety net programs, universal healthcare. Empower members of society at all socioeconomic strata to make increasingly voluntary decisions.
- Implement heavy taxes on advertising in public spaces, both meatspace and digital. While we're at it, make our public spaces beautiful again.
- Invest in biotechnologies such as neurofeedback and ultrasonic neuromodulation to explore whether they offer sustainable, accessible, efficient avenues towards improving consciousness.
- Include meditation practice in public curriculums (while we're there, we should rethink the entire passive-learning paradigm, instead favoring a more active approach).
- A zillion other possibilities.
Again, an ethical commitment to consciousness being the most important thing in the world is an inherently heterogeneous commitment. This sort of commitment doesn't constrict us, but instead, drives us to unfold the manifold potentialities we've only begun to taste. It drives us out in different directions to consider every possible angle of our lives. Consciousness is implicated in every human endeavor, every domain of knowledge, every field of study, and so provides an absolutely omnivorous foundational commitment.
A consciousness ethics fosters a commitment to diversity, novelty, agency, sustainability, and exploration, while simultaneously cohering our efforts around particular principles that can be applied across each domain. It fosters growth and coherence.
If each era of society is enthralled to a particular question, a particular piece of the human conundrum they orient themselves around, like an archer closing an eye and taking aim at a target held up by their particular cultural, civilizational moment, ours may well be: what is a good state of consciousness, and how can we foster more of them?
This, to me, is a question worth living for. And should the heat death finally wipe us out, the scattered matter that was once my body will rest easy, the shards of my mind bathing in exorbitant sufficiency, fulfilled by the knowledge that it was, too, a question worth dying for.