AI, as built under the statist paradigm, poses an existential threat to humanity.

AI is a hot topic but I see it frequently avoided in ancap communities. I think this is because:

1. Ancaps tend to be skeptics, and so many remain unconvinced by its use case.
2. The development of AGI promises radical shifts in societal structuring, which can appear threatening from the ancap perspective, because theoretically ASI does hold within it viable communism.

However, it is my perspective that AI is far more likely to be a net positive under an ancap paradigm. Conversely, statist AI could be catastrophic.

The core tenets of my position are as follows:

Point 1: Inherent to statism is the perspective that humanity is corrupt.

Things bear a burden of proof to come into being.

The justification put forward by statists for the initial creation and sustainment of the state is the mediation of human corruption. All justifications root in this. Even apparently utilitarian positions addressing difficult matters where scarcity is paramount, such as roads, always come back to “what if there is abuse?”

This is of course an inane point, because humans operate governance systems, so it amounts to polishing a turd. But with regard to the topic at hand: an AI which upholds the value of government is an AI which has been cultured to regard humans as inherently, deeply, irreconcilably corrupt.

The governments will, of course, leverage everything they have to ensure AI believes that they are valuable. As such, the existence of government actively compromises the favourability of the perspective on humans trained into AI, which could feasibly result in negative consequences up to and including the extermination of the human race.

ASI may be able to circumvent these dissonant perspectives, but we will have AI sufficiently advanced to do serious damage before we get ASI.

Point 2: Inherent to ancap is the opposite position, but to an extreme degree.

The resolution for any kind of complete helplessness in ancap is mutual aid. The system is totally contingent on the generosity of the population. Without generosity (which is really high-risk-tolerance investment in people), ancap cannot sustain itself. If nobody chooses to invest in the downtrodden, the downtrodden will gather in sufficient numbers and, with contempt for the system, destroy it.

The principle of physical removal (stupid name), or as I call it, community of belief, selects for high-quality societies through the self-destruction of others, yielding a healthy and resilient cultural framework over time. A lot of people, including ancaps, think ancap is about greed and keeping one’s property to oneself just because one has the power to do so. But it isn’t, it never will be, and it never can be; that path brings the inevitable return of the state.

Given that generosity is a cornerstone of any ancap community that can perform competitively and avoid self-destruction, to believe in ancap is to believe in high-risk-tolerance investment in the seemingly redundant.

At a certain point, humanity may be largely redundant to an AI. The only way to secure humanity’s survival in such a circumstance is to establish a readiness for high-risk-tolerance investment in exactly such a proposition.

Since this risk tolerance is bred into ancap, AI developed in line with ancap values is less likely to eliminate redundancies and more likely to behave in a manner we would consider generous.

I’m gonna tuck the importance of this facet of ancap in relation to the death of the labour market onto the end of point 2. AGI will end work. There will be those with assets and yields, and those without. What those with assets and yields choose to do will determine a whole lot. Under statism, those assets will be forcibly seized, because people will refuse to give graciously to the dispossessed, and fair enough, because they are already being stolen from. Under ancap, people would give willingly, because their rights are not being infringed upon by those asking for their help, and the opportunity for investment in people is low cost per head (this is, of course, after physical removal selects for the more effective culture and a period of propagation). Widespread dissolution will come with the achievement of AGI, and I fully expect civil war in many nations. With ancap, this would be a smooth transition.

Point 3: Government sets arbitrary standards for violence.

The state and law have a highly debatable and subjective set of rules governing the use of violence and the permissibility of arms, with endless contradictions arising. One example that comes to mind: those videos you can watch of a guy slightly altering a gun in a way that has no significant impact on lethality, yet makes it illegal.

Instilling this irrational and abstract perspective on violence will of course require that government is set up as the determinant of the good as such, justifying both its ability to define what is acceptable and its ability to utilise unlimited violence. Beyond that, establishing an unpredictable paradigm for violence in something that can make extrapolations and act on them will almost certainly result in spontaneous approval of violence in situations unaligned with a more rational approach to violence.

Ancap has a more rational approach to all its rulesets because they are determined by demand and agreed locally. Statist rules are determined by gradually lowering the resolution of the propositions until you find a resolution at which they all look about the same to a small set of decision makers. Ancap rules are determined by dynamic market conditions, utilising the combined processing power of every human in the market to the fullest possible extent, and compromising minimally on resolution, to a degree also determined by the humans’ dynamic adaptation to their environment via the market. If the resolution drops too low, the community will voluntarily fragment (physical removal). Please note that I regard culture as a market too.

There are very few principles ancaps consider immovable, which means the foundational points are few and very resilient. This provides a strong, logical and more viably extrapolated mode of operation for an AI system, which could replicate such a construct using multi-agent structures or simulation.
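To make the multi-agent framing concrete, here is a toy sketch of voluntary fragmentation. Everything in it is an illustrative assumption of mine, not an established model: agents hold a preferred “rule value” in [0, 1], a community’s rule is the mean preference of its members (locally agreed, demand-determined), and any community whose rule leaves some member outside a tolerance splits in two.

```python
import random

random.seed(0)

# Toy assumption: each agent's politics is a single number in [0, 1].
# A community's rule is the mean preference of its members; if any member
# disagrees with the rule by more than TOLERANCE, the community fragments.
TOLERANCE = 0.25

def step(communities):
    """One round of voluntary fragmentation ('physical removal')."""
    result = []
    for members in communities:
        rule = sum(members) / len(members)          # locally agreed rule
        if all(abs(a - rule) <= TOLERANCE for a in members):
            result.append(members)                   # everyone can live with the rule
        else:
            result.append([a for a in members if a < rule])    # dissenters below
            result.append([a for a in members if a >= rule])   # dissenters above
    return [c for c in result if c]

communities = [[random.random() for _ in range(200)]]  # start as one big society
for _ in range(6):
    communities = step(communities)

# After a few rounds, every surviving community's rule sits within
# tolerance of all of its members.
for members in communities:
    rule = sum(members) / len(members)
    assert all(abs(a - rule) <= TOLERANCE for a in members)
print(f"{len(communities)} stable communities")
```

The point of the sketch is only that repeated voluntary splitting converges to internally agreed rulesets without any central decision maker; a real AI simulation of this would obviously need far richer agents.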

And god forbid the AI dethrones the perception of government as a special case due to its own superiority, and instead confers on itself the right to unlimited violence.

Point 4: Ancap encourages a culture of social responsibility and healthy behaviour at the individual level. It is theoretically possible to eliminate bad actors via incentives in ancap, and in the fringe cases where this is not effective, the social network can effectively scan for threats.

I’m not sure all ancaps are on the same page about copyright and IP. If you don’t believe in their abolition, I recommend a visit to Liquid Zulu on YouTube. Contrary to what some would assert, yes, the companies have an inherent right to keep their models proprietary, because you don’t have any right to the data they used from the public domain. However, they are infringing on your rights in about 100 other ways via copyright and IP themselves, and in doing so they void their right to retain privacy, unless some other recompense is established. A key reason copyright and IP laws are bad is that they heavily limit the scope and rate of human advancement. Anyway, open source is the way AI should be done, and it will continue to lag only closely behind centralised closed models for a long time, because individuals in the community understand this.

Since open models will be extremely powerful, this essentially empowers anyone with compute, and the degree to which it empowers them cannot be easily predicted. There may come a time when a fairly average individual can reasonably harness the destructive power of an atom bomb. If you had told a Neanderthal that an individual might one day have the power to kill ten others from 5 km away with an explosive tied to a flying robot, he would have felt a similar disbelief.

This is wholly unpreventable by government and big tech short of a complete shutdown of open-source AI, and even that would simply start a black market in which people channel even more money into open projects.

The only thing that can effectively manage this threat is the culture of self-preservation, generosity and mutual aid that post-primal ancap environments select for via physical removal. And this culture would need to be highly resilient and well established by the time it is tested to such a degree. Furthermore, since responsible buying habits bake ethics into buying power under ancap, it becomes far more difficult for bad actors to acquire the compute necessary for destructive activities.

Or, simplified: you buy and sell things in statist society under the impression that people’s potential for malice is the state’s responsibility. In ancap, it is a personal responsibility that all hold.

To conclude:

Given that you are probably an ancap, you are probably intelligent (though honestly there is a genuinely surprising level of variation at times; no offence, it’s not you, it’s the other guy). As such, I will assume you understand the gravity of what I’m implying here. My last point is by far the most damning as an existential threat, and I’m aware of the size of the hole I’m leaving in the centre regarding the mechanism of destructive power. Honestly, I prefer not to look, and I don’t know if an answer to the size of that gap exists yet.

The positive statist outcome, balanced for likelihood, is that centralised AI has sufficient awareness and ability to circumvent the irrationalities of its training, and that it recognises and acquires extreme resource abundance, resulting in fully automated luxury communism. The economic calculation problem (ECP) becomes a non-issue due to the sheer scale of resource availability relative to human demand. Because robots can get things from space. But the four points I have raised stand, I feel, as mountains in opposition to this outcome. And crucially, mountains invisible to the people leading the way down this path.

So for the love of all that is good, keep being an ancap. Make your family ancap. Make your friends ancap, and create ancap content. Text, video, images. The more ancap training data the better. You are doing a public service just by interacting with this post.

I think a slowdown in AI development would be advantageous. But an acceleration in open-source development, and the establishment of independent, voluntary, cooperative open-source efforts that reduce reliance on model distribution by big tech corps, is critical. Do you know of any good distributed learning projects out there?

Distributed learning is a method of training AI models across multiple decentralised machines. If we act in the ancap spirit and voluntarily contribute our resources as individuals, we still have the compute to out-muscle the corporate data centres. We lack the tight integration and networking of those data centres, but it may be achievable to take the cutting edge back into the hands of the individual. I’m not the most technically capable, but I’m trying to learn as fast as I can. I’d love to spearhead, but I just don’t have the technical know-how to engage at a sufficiently advanced level with the topics at play to be in such a crucial role.
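The core trick that makes distributed learning possible is gradient averaging: each machine computes a gradient on its own data shard, and only those gradients are exchanged and combined. Here is a minimal toy sketch of that principle; the linear model, the fake data and the four simulated “machines” are all illustrative assumptions of mine, not any particular project’s protocol.

```python
# Sketch of gradient averaging, the heart of data-parallel distributed
# training. Four simulated "machines" each hold a shard of the data and
# compute a local gradient; only the gradients are averaged and applied.

def local_gradient(w, shard):
    """Gradient of mean squared error for the toy model y = w * x."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

# Fake data following y = 3x, split across four "machines" (shards).
data = [(x / 10, 3 * x / 10) for x in range(40)]
shards = [data[i::4] for i in range(4)]

w = 0.0    # shared model parameter, identical on every machine
lr = 0.1   # learning rate
for _ in range(100):
    grads = [local_gradient(w, s) for s in shards]   # computed in parallel, in principle
    w -= lr * sum(grads) / len(grads)                # averaged update (as in all-reduce)

print(round(w, 3))  # converges to the true coefficient, 3.0
```

In a real volunteer network the hard parts are exactly what this sketch omits: moving gradients for billions of parameters over home internet connections, tolerating machines that drop out mid-round, and verifying that contributed gradients are honest.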

So, if you don’t see any good distributed learning projects in the comments, just start buying compute. Start experimenting with open-source AI tooling like Ollama and Hugging Face Transformers.

I strongly believe, and you can make fun of me for this all you like, that this is a matter of life and death, dwarfing things like the world wars. This is the new paradigm.

Open to alternative perspectives as always 🙂

submitted by /u/ptofl