governance ideology
civic virtue, the state of exception, and ai-mediated human-interpretable abstracted democracy
Formally, Governance is an AI‐mediated Human‐interpretable Abstracted Democracy. It was constructed as an alternative to the Utilitarian AI Technocracy advocated by many of the pre‐Unification ideologues. As such, it is designed to generate results as close as mathematically possible to the Technocracy, but with radically different internal mechanics.
The interests of the government's constituents, both Human and True Sentient, are assigned to various Representatives, each of whom is programmed or instructed to advocate as strongly as possible for the interests of its particular topic. Interests may be both concrete and abstract, ranging from the easy to understand "Particle Physicists of Mitakihara City" to the relatively abstract "Science and Technology".
Each Representative can be merged with others—either directly or via advisory AI—to form a super‐Representative with greater generality, which can in turn be merged with others, all the way up to the level of the Directorate. All but the lowest‐level Representatives are composed of many others, and all but the highest form part of several distinct super‐Representatives.
Representatives, assembled into Committees, form the core of nearly all decision‐making. These committees may be permanent, such as the Central Economic Committee, or ad‐hoc, and the assignment of decisions and composition of Committees is handled by special supervisory Committees, under the advisement of specialist advisory AIs. These assignments are made by calculating the marginal utility of a decision inflicted upon the constituents of every given Representative, and the exact process is too involved to discuss here.
At the apex of decision‐making is the Directorate, which is sovereign, and has power limited only by a few Core Rights. The creation—or for Humans, appointment—and retirement of Representatives is handled by the Directorate, advised by MAR, the Machine for Allocation of Representation.
By necessity, VR Committee meetings are held under accelerated time, usually as fast as computational limits permit, and Representatives usually attend more than one at once. This arrangement enables Governance, powered by an estimated thirty‐one percent of Earth's computing power, to decide and act with startling alacrity. Only at the city level or below is decision‐making handed over to a less complex system, the Bureaucracy, handled by low‐level Sentients, semi‐Sentients, and Government Servants.
The overall point of such a convoluted organizational structure is to maintain, at least theoretically, Human‐interpretability. It ensures that for each and every decision made by the government, an interested citizen can look up and review the virtual committee meeting that made the decision. Meetings are carried out in standard human fashion, with presentations, discussion, arguments, and, occasionally, virtual fistfights. Even with the enormous abstraction and time dilation that is required, this fact is considered highly important, and is a matter of ideology to the government.
—To the Stars
Authoritarianism vs republicanism is one of the timeless debates of political philosophy. It remains a central question in our time, plagued by stagnation and wracked by upheaval, as we seek a way forward for our society that safeguards human values while also enabling effective action.
How can you get the decisiveness, speed, and vision of a young and capable autocracy, without succumbing to the inevitable excesses, abuses, and bloodletting once its supreme leader is no longer both unchallenged and enlightened? And how can you have the responsiveness to public will and sensitivity to the common good of an ideal republic, without the sclerosis, obstructionism, influence-peddling, and demagoguery that turns government into a quagmire, one too often only escaped through dictatorship?
Governance—the mostly benevolent leviathan that rules over interstellar humanity in To the Stars—is a creature of the tech level of its setting, replete with friendly AI and transhuman augmentation. The story presents a fascinating vision of government run by computation that actually feels human, in that it centers human moral intuitions and human values, instead of succumbing to the rationalist-tinged dream of a heartless yet logical totalitarian utility function. Governance attempts to capture as much of the efficiency and effectiveness of artificial intelligence as possible, while remaining responsive and beholden to the interests of individual humans and humanity as a whole.
Stripped of its sci-fi trappings, Governance is also a commentary on authoritarianism vs republicanism. It is, in essence, a careful balancing act between these two styles of power. On one end of the scale, the effectiveness of pure command. And on the other, the restraining influence of constituencies advocating for their particular interests. It captures as much of command as it dares, without losing sight of the critical need for restraint. While the technology of the setting is grandly utopian, the political concerns and human factors are very grounded and realistic, and this makes it worth contemplating seriously.
The fluid nature of constituencies, enabled as it is by technology, is quite aspirational—in contrast to our world of strictly geographical representation supplemented by affinity- and interest-based activist and lobbying groups. Decisions are made by representatives on the basis of what is necessary and good for their own constituencies. Because of their agglomerative nature, representatives can actually take into account every stakeholder in a debate, from every angle a question may be viewed. The ability to merge representatives into single consciousnesses is crucial for defending the interests of all those represented without deadlocking when those interests must be traded off.
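The agglomerative structure can be sketched as a toy model. To be clear, everything here—the names, the weights, the additive merge rule—is my own invention for illustration; the story only tells us that representatives merge and that assignments involve some marginal-utility calculation it declines to specify:

```python
from dataclasses import dataclass, field

@dataclass
class Representative:
    """A toy representative: advocates for a weighted bundle of interests."""
    name: str
    # interest -> weight (how strongly this representative advocates for it)
    interests: dict = field(default_factory=dict)

    def utility(self, decision: dict) -> float:
        """Score a decision (interest -> impact) by this constituency's weights."""
        return sum(w * decision.get(k, 0.0) for k, w in self.interests.items())

def merge(name: str, *reps: Representative) -> Representative:
    """Form a super-representative whose weights sum over its constituents,
    so no member interest is dropped as generality increases."""
    combined: dict = {}
    for rep in reps:
        for k, w in rep.interests.items():
            combined[k] = combined.get(k, 0.0) + w
    return Representative(name, combined)

# Invented example constituencies, echoing the story's examples
physicists = Representative("Particle Physicists", {"research_funding": 1.0})
teachers = Representative("Educators", {"education": 1.0, "research_funding": 0.3})
sci_tech = merge("Science and Technology", physicists, teachers)

# A decision's projected impact on each interest
decision = {"research_funding": +2.0, "education": -1.0}
print(sci_tech.utility(decision))  # 2.0*1.3 - 1.0*1.0 ≈ 1.6
```

The point of the sketch is the shape, not the arithmetic: because merging is aggregation rather than replacement, the super-representative carries every constituent interest into the tradeoff instead of averaging them away.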
After a decision is reached, the organization Governance acts in harmony with itself to implement it. And those decisions are debated and adjusted as time goes on, with the continuing input of representatives for the people and organizations they'll ultimately affect.
If we ever have the technology to evolve past republican democracy, into something more democratic, the plausible forms to me are digital direct democracy and this sort of fractal representation. I would hope for the latter. Greater public involvement in the minutiae of policy can only harm the ability of government to get anything done. The ideal to pursue is not for the public to take on the work of specialists. Rather, it is to guide the specialists toward following the best interests of the public.
Governance harnesses technology to speed decisionmaking and smooth out complex coordination. Human representatives are heavily modified to be able to collaborate with sentient AI and sub-sentient computer programs as fast as hardware allows. But Governance also spends much of that theoretical efficiency as currency, to buy things that are more valuable. Much like in programming, if your underlying hardware doubles in processing power, you can give up some slack to make things more pleasant to work with while still realizing much of the jump in speed.
Most importantly, and most relevant to those of us not living in a universe with cyborgs and AGI, Governance is absolutely constrained by inviolable moral principles and axiomatic rules. Moral principles emerge from the fundamental dignity of humankind, which can only be taken on faith, whereas artificial axioms are meant to preserve an acceptable standard of "human" and defend the ability of Governance to govern. The distinction between the two classes is valuable and highly instructive, because in political discourse they tend to be conflated as things of the same kind, givens we're all meant to accept. When not simply sloppy, this conflation is often done to lend unearned moral weight to rules that are more a matter of practical necessity.
The "core rights" are the moral dimension of Governance, not explicitly enumerated in-story, but include such things as freedom of religion and the right of exit from quasi-independent colonies, abusive families or organizations, and so forth. They are sacrosanct and unquestionable except in the event of existential risk to humanity itself.
On the other hand, "Governance ideology" is the axiomatic component, principles that are adhered to because they advance a useful interest. Certain kinds of biomodding and cyborgization are allowed for convenience, practicality, and safety, but anything that threatens to alienate one too far from baseline human experience is banned. Interstellar transportation and communications bandwidth are restricted for the sake of preserving cultural diversity in the colonies, after Earth became a monoculture and Governance feared a loss of robustness. Governance goes to great lengths to remain human-interpretable, at the cost of some efficiency, because the ability for people to understand government, even if largely theoretical, is a good in itself.
The laws of God and the laws of man, although natural law does get difficult without a god to pin it to. Of the core rights, I can find little fault. Even if we recognize them as aspirations imperfectly applied, even if we accuse them of being human contrivances, the concept of the rights of man is so ingrained in us and so healthy to human flourishing that questioning their merit feels like mere contrarianism.
Governance ideology is a different matter. Its axioms are defined by circumstance and utility, and may be up for revision as situations evolve. Indeed, in the story itself, it's shown that Governance's harsh restrictions on extreme biomodding are already secretly being violated due to the exigencies of war with an alien species.
Core rights are universal, but Governance ideology is particular, and I especially like the turn of phrase because it represents a non-partisan form of ideology: the playing field, the bounds of discourse. If "human rights" are our universal, then perhaps "liberalism" is our particular. But it may be up for revision.
Governance, again, exists to balance command against restraint, decisiveness with sobriety. All real government, democratic or not, performs this balancing act in the same manner. You need to be able to act, and you need to be able to constrain. When action becomes rash or harmful, new constraints are invented to rein in those actors. And when constraints become too heavy, the people long for them to be lifted, for someone to at least do something.
Much of what's wrong with American government comes down to a failure of this balance. Congress finds it impossible to act, so hacks are implemented to route around the damage, resulting in a system with fewer checks, greater opacity, and more capriciousness than if it simply functioned normally.
The filibuster, once extraordinary, is now routine, so federal appointments can't be made, so certain kinds of appointments are made immune to this check. Passing 12 appropriations bills every year got too hard, so they're bundled into omnibus bills, which attract all kinds of riders that would never pass on their own even in good order. The omnibus bill must pass, so it can also be made filibuster-immune through budget reconciliation, but then sometimes the majority uses reconciliation to force through partisan legislation expecting that the budget will have to pass anyway.
The official means for federal bureaucracy to define rules with public input became a target of activist obstruction and lobbyist wheedling, so now Congress grants agencies the right to declare rules by fiat. With no legislative base of power, both Obama and Trump resorted to executive order to implement their agendas, resulting in a nightmare for anyone affected as rules toggled on and off arbitrarily as challenges, stays, and injunctions pingponged through the court system. And so forth.
From this perspective, the dichotomy between republicanism and dictatorship gets rather messy. Pure autocracy may have no legal checks on its power, but it's still subject to economic and military constraints, public exhaustion, public revolt, palatial coup. And any representative democracy, no matter how many checks exist, in the end must have people who act, on their own will, for anything to be done. In fact, past a certain point, a profusion of obstructive checks may enable or even force the violation of the ones that are truly important.
But a distinction can still be made! Purveyors of doctrine may look at the morass and long for a gleaming sword, but the sword cuts all in its path. This is the essence of nebulosity and pattern: you can see the underlying commonalities between two things without resorting to equating them.
The rights of the citizenry enshrined in law. Internal checks, such as separation of powers, an independent judiciary, and civilian control of the military, which obligate or incentivize parts of the government to oppose excesses of others. External checks, foremost at the federal level being election by secret ballot. These are the distinctions.
None of these mechanisms are perfect, many cause problems that could be solved, and many are in fact in shambles. We don't need to commit to any particular mechanisms. Not even voting is truly sacred; it is a matter of our governance ideology.
What actually matters is the ability of the sovereign to act, constrained by limits on those actions, such that they must serve the many, while protecting the core rights of all. That is a matter of moral principle.
The stated intentions of the structure of the government are three‐fold.
Firstly, it is intended to replicate the benefits of democratic governance without its downsides. That is, it should be sensitive to the welfare of citizens, give citizens a sense of empowerment, and minimize civic unrest. On the other hand, it should avoid the suboptimal signaling mechanism of direct voting, outsized influence by charisma or special interests, and the grinding machinery of democratic governance.
Secondly, it is intended to integrate the interests and power of Artificial Intelligence into Humanity, without creating discord or unduly favoring one or the other. The sentience of AIs is respected, and their enormous power is used to lubricate the wheels of government.
Thirdly, whenever possible, the mechanisms of government are carried out in a human‐interpretable manner, so that interested citizens can always observe a process they understand rather than a set of uninterpretable utility‐optimization problems.
The success of the government in achieving these three goals is mixed.
The balance of action and constraint leads to another balance I consider to be central: that of judgment and procedure. This dichotomy is at the center of many political issues because it is a difficult and fundamental tradeoff.
The pure judge is an individual empowered to make whatever decision he sees fit. He answers to no one but the higher courts and his own understanding of law. This is embodied for me in the rachimburgs, the judges of the pre-literate Franks. As Michel Rouche describes: "These men learned each article by heart and memorized the most recent decisions, which created a body of precedents. Living libraries, they were the law incarnate, unpredictable and terrifying. Simply let a judge pronounce in Old High German the words 'free man maimed on the grass,' for instance, and sentence was automatically passed."
At the opposite end of the spectrum is the perfect bureaucracy. The humans who work in the system, ideally, have no ability to make decisions whatsoever. They exist only to enact the process, which is dictated by a strict set of rules meant to cover all cases. Complex, but deterministic: given the same inputs, the ideal bureaucracy always produces the same outputs.
Things are never so simple. We all know that in practice, bureaucrats have more or less latitude to make exceptions and break the rules. And judges are hamstrung by policies and statutes that constrain how they can rule and what sentences they're allowed or obligated to hand down. Rather, the point is to illustrate a tradeoff surface.
Decisions made with judgment may offend the sensibilities of others with different standards, so rules are put in place to enforce certain levels of punishment or clemency. And these tie the hands of later decisions even when common sense says the differing facts merit a different penalty. Likewise, procedure cannot cope with unexpected cases, so either new rules need to be added to the maze, or its agents need to be given some leeway to make sensible decisions. But now the procedure isn't quite so knowable, whether because it's gotten too complicated to understand, or too much is up to whatever clerk you happen to get.
Both of these balances are instances of something much more primal: rigidity vs flexibility, pine and bamboo. Standing strong against the wind, or bending with its contours. The picture is completed with the addition of plum: beauty, beauty in the face of adversity.
The Emergency Modes of Governance are designed to operate the government, the military, and Human society with progressively greater degrees of efficiency, but at a considerable cost to societal conventions, civil liberties, and government ideology. As such, they are only invoked in the direst of emergencies, and only the lowest level has ever been activated.
Emergency Mode Level One is a full Emergency Session of all existing Governance Representatives, ensuring that every Representative is devoting at least some computational time to the problem at hand. It was last invoked after the attack on Aurora Colony, and was canceled three weeks after New Athens.
Emergency Mode Level Two, called by a majority vote of the Level One Session, causes the merger of all members of the Directorate into the super‐Representative Governance, containing in its consciousness every Human and AI representative, as well as every advisory AI and the majority of military AIs. This merged AI would hold supreme sovereignty, actualizing the AI technocracy that the current government is meant to imitate.
It is presumed that Level Two would only be invoked upon an imminent invasion of Earth. No one is quite sure what it would look like, and philosophers debate whether such an AI would be closer to a Supreme Dictator or a Philosopher‐King.
Emergency Mode Level Three can be called by Governance. Every citizen is mobilized into the military, and a direct two‐way interface is opened between every citizen's brain and the nearest computing network, allowing the transmission of orders and relay of information. It should be emphasized that these orders do not exert any compulsory effects, and are simply orders. At this point, the super‐Representative becomes stylized Humanity. The Core Rights are suspended, and the government recovers its powers of execution, summary imprisonment, and so forth.
Level Three has never been invoked, and it is expected that it would only be invoked upon the actual invasion and imminent loss of Earth.
Emergency Mode Level Four can only be called with the direct approval of ninety percent of Human citizens and AIs. It involves the permanent activation of civilian Emergency Safety Packages and, essentially, the mechanization of all Human interaction. While directives are still non‐compulsory, the obvious and terrifying dystopic implications of Level Four lead to the expectation that it could only happen upon the imminent destruction of Human civilization.
A catch that has always bothered me is that there must be constraints on the power of the ruler, but there is no human mechanism that can enforce them. If the ruler is paramount, he can rewrite the rules as he sees fit, if not outright violate them. And if he is not paramount, and there is some other entity that can extract his compliance by force, then that entity is the true ruler. It's self-defining, just as Schmitt said: the sovereign is the one with the ability to declare a state of exception.
Of course Schmitt is exactly the kind of vulgar ru we have no use for. How blackpilled is he! Just because such a position is achievable doesn't make it defensible, and certainly it should not invite praise. In America we worship not Caesar but Cincinnatus.
Separation of powers is interesting because it tries to solve the problem through conflict. "Competing institutions jealously guard their own power and conspire to hobble the others" is a perfectly cynical equivalent to the more high-minded explanation we get in civics class. In practice, the system we have is hit or miss. The Founders feared Parliament above all, and designed their new system assuming that Congress would abuse its power rather than shirk it.
But there's a broader idea here. If some principle is violated, there are other entities that will enforce the penalty against each other. It doesn't just come down to three branches, but all the various individuals in a sufficiently interconnected system being willing to act against those who transgress it.
This is why, for instance, a judge can sentence a man to death, but can't shoot that same man on the street. Or why an accomplice in a crime might turn in the perpetrator in exchange for a plea deal, or why a loyal party member may denounce a heterodox comrade in hopes of forestalling his own reeducation. It's also why coups are so touchy while in motion: a group swoops in and attempts to secure as many of the organs of state as possible, illegally, in the hopes that others will take it as a fait accompli and consent to legitimize the changing of the guard. It's never truly unilateral.
This is also why—if you're ever planning to execute a coup yourself—the basic formula is to control the military, the judiciary, and the media. You want the power of violence to compel action, the hand of the secular priesthood to sanctify it, and the ability to tell people about it. Everyone else will fall in line.
In other words, there exists "the way things are done." People know it themselves, and they expect things to follow it in good order. They also know that others have similar models in their own heads. Thus a group of people becomes a mob, ready to enforce its norms on the sixth monkey at any time, and that fear keeps them all in check.
In a complex enough system, no one is really paramount. This doesn't carry any inherent valence, good or bad. Whether it's good or bad comes down to the individuals, the transgression, the system, and the fruits.
Bronze Age Pervert remarks that no Western potentate, no matter how wealthy or influential, has the simple power enjoyed by any true strongman to kill a rival and steal his wife. This is true, and it's because the system is what's in charge. The snake eats its own tail.
We might call the general principle "political autoregulation." It shows up in history in various forms, generally informal coordination mechanisms, or simple exertions of power from below. The Mandate of Heaven encouraged a bi-stable system where everyone should revolt against an unfit ruler, and everyone should throw their lot in with a successful enough challenger. Excommunication allowed some popes to incite rebellions against impious rulers, although it was more often used against the troublesome. The plebeians on five different occasions won concessions and checked their patrician overlords by following the timeless wisdom of "hit da bricks... you can just leave!" The Praetorian Guard sealed the fates of Caligula and Nero, serving as something of a check on imperial excess until they realized they could use their station to juice their comp.
The other means of restraining the power of the ruler, whether chancellor, court, or king, has been scruples. Natural law. Fear of the wrath of God or the judgment of Heaven. Respect for human rights and adherence to the laws of war. Res publica.
Natural law was perhaps the greatest regulatory mechanism ever invented, and it's kind of a shame that we lost it to legal positivism. But the fact that I can even write such a sentence means there's no hope for it or anyone who would try to revive it. It depends on questioning it being unthinkable, in the literal sense. Once you've pulled back the curtain, there is no more great and powerful Oz.
I realized a few months ago that without even trying all that hard I actually became a Confucian. I don't think it's possible to design an incorruptible system. Any aristocracy of merit will naturally seek to perpetuate itself, eroding its founding principles. Whether of the sword, robe, or coin, any class that takes on duty in exchange for privilege will eventually arrange things such that they can pass that privilege down their line. Systems may be set up to inhibit this, but they will always be subverted.
The only thing to be done, then, is to cultivate virtue. Civic duty, high social trust. Honor, brotherhood, solidarity, mission. Whatever form it takes, when you have the magic that makes people want to coordinate, everything else flows from it. Every successful attempt at reform or nation-building has started with rulers and ministers who cared deeply about the outcome, for its own sake, and pursued it together because it mattered to them.
Washington and Lincoln accomplished what they did because of a deep sense of duty, purpose, and stewardship of their nation. Fan Zhongyan and Wang Anshi were driven to remonstrate with their lords because they felt responsible for the outcomes of things. Gustavus Adolphus rode into battle with Grotius at his breast. Abaoji defended the ways of his people, the ways that kept them strong.
In the words of Master Xun, "There are chaotic lords; there are no states chaotic of themselves. There are men who create order; there are no rules creating order of themselves" (12.1). This expresses my meaning.
To a past observer, the focus of governmental structure on AI Representatives would seem confusing and even detrimental, considering that nearly 47% are in fact Human. It is a considerable technological challenge to integrate these humans into the day‐to‐day operations of Governance, with its constant overlapping time‐sped committee meetings, requirements for absolute incorruptibility, and need to seamlessly integrate into more general Representatives and subdivide into more specific Representatives.
This challenge has been met and solved, to the degree that the AI‐centric organization of government is no longer considered a problem. Human Representatives are the most heavily enhanced humans alive, with extensive cortical modifications, Permanent Awareness Modules, partial neural backups, and constant connections to the computing grid. Each is paired with an advisory AI in the grid to offload tasks onto, an AI who also monitors the human for signs of corruption or insufficient dedication. Representatives offload memories and secondary cognitive tasks away from their own brains, and can adroitly attend multiple meetings at once while still attending to more human tasks, such as eating.
To address concerns that Human Representatives might become insufficiently Human, each such Representative also undergoes regular checks to ensure fulfillment of the Volokhov Criterion—that is, that they are still functioning, sane humans even without any connections to the network. Representatives that fail this test undergo partial reintegration into their bodies until the Criterion is again met.
I hope to see man-machine symbiosis in my lifetime. On the whole, I find myself skeptical of AGI, and dismissive of the alignment problem. Over the past year, I've been shocked by the utility of LLMs, and incorporated ChatGPT into my daily functioning. But I still think, when this is said and done, we'll have much better tools for human hands, not independent, superhuman sentience.
I'm not sure what effect LLMs will really have on knowledge work. They can filter through and condense data with great adroitness, but they can also produce more of it than anyone can consume. I have a pet theory that personal computing and the internet have not had as large an impact on GDP as one might hope because the large productivity gains made by automating clerical labor are offset by our ability to generate vastly more need for it. This is my default LLM case, albeit an uninspiring one. Maybe Lycurgus was right after all.
But sci-fi visions like "AI-mediated human-interpretable abstracted democracy" should be aspirational. After all, Snow Crash directly led to the creation of Google Earth and the Oculus Rift. If text is to be the universal interface, surely this model of cyborg government is a more fertile vision of the future than easier API integrations, bloodless personal assistants, and automated spam.
There is something I've consistently failed to articulate to my satisfaction about what it feels like to "collaborate" with ChatGPT. Whenever I talk about how I use it, people always ask how I avoid being misled by confabulation. I don't have any problems with confabulation, and haven't since about a month after I started using it, but I couldn't quite describe why that's the case. I would say things like, you need to be "critical" with it, you need to "engage in dialogue," but this didn't capture what I really meant.
What I mean is this: when I talk to ChatGPT, I'm rarely asking it for information as a shortcut to web search. I only use it this way for answers I already know enough to evaluate, or for tasks I can trivially sanity-check. Most of my interaction with ChatGPT is a robust back-and-forth, where I'm using it as an aid to my own thinking rather than simply a source of truth.
Studying math, for instance, I'll paste (or more recently screenshot) content from textbooks, and ask for explanations of the material in front of it. I'll talk through my own understanding, and have it verify or refute. I'm looking for confirmation of conclusions I've reached on my own, or extrapolation and context for what I'm beginning to grasp. I'll analogize to other fields, make logical leaps to deeper abstractions, and ask if my inferences check out. If I'm not satisfied with my understanding of something, I'll keep asking, and keep working at it, until I know that I've got it.
In other words, I don't use it much like a chatbot and I don't use it like an encyclopedia. It isn't just a thing I talk to so much as an extension of my own thoughts. It's a peripheral that I offload processing and recall to. It is part of my mind.
That's where I think this all goes. Neo-bicameralism. We will talk to another voice in our own head until a new form of consciousness is born.
If the United Front could successfully rebuild the world, its directors hoped to use the gratitude of the populace to entrench their ideology and successor government forever. To this end, on top of its ambitious rebuilding objectives, the Council promised grandiosely to construct Eudaimonia on Earth, promising to make the Future dreamed of by Humanity real, and to change the human condition forever.
Nothing was off the table. On a small scale, congestion‐free streets, universal augmented reality, easy air travel, an end to violence and crime. On a larger scale, the Council inaugurated a set of projects ambitious both in scope and name, intended to be Manhattan Projects for a new age: Project Eden sought clinical immortality, Project Janus sought FTL travel, and Project Icarus sought to use solar satellites to harvest the light of the sun, making energy not just cheap, but free. With these accomplishments, the Council sought to win eternal loyalty from its citizenry.
Finally, the Council sought to remake government. The Council sought to make a government provably aligned with the populace's interests, indivisible, and so amorphous as to be unassailable. There would be no personality, no princeps, only Governance.