Let's talk about Blockchain Governance and AI

Hey All!


  • AI is a massive, provocative, often misunderstood topic, so I feel I should explain the focus of this discussion.

  • When talking about AI, I am not talking about real AI, or thinking machines, or anything like that.

  • When I say AI, I merely mean good old “smart” machines, which may or may not employ machine learning techniques in their algorithms.


  • The idea of combining AI with Blockchain governance is obvious.
    Well, it should be to anyone who has read erudite genius Ursula K. Le Guin’s book “The Dispossessed” (1974), where (* * Mini-Spoiler * *) the anarchist moon Anarres is governed by independent syndicates, whose market is regulated by a “computerized exchange”, very reminiscent of, you guessed it, AI. (I will not go into it right here, but just go read it :smiley: ).


  • Blockchain in general, and Cardano in particular, herald the implementation of governance and regulation of enterprise through smart contracts.
    It really is only a question of complexity imo, and I argue that using smart contracts to these ends will quickly give the impression that “the computer is running things” (whether machine learning elements are included or not).

  • Complexity is the key word here, however you spin it.
    We bring in machine learning to solve problems we can’t untangle ourselves, where it’s too difficult for our brains to even see the pattern. And since the problems we are discussing in this context come from game theory, economics, and “psychology”, I can be pretty confident when I say we are talking about complex systems here.

  • I won’t go into it here, but the type of problems smart contracts will need to solve (especially in the context of governance & regulation) involve prediction, self-balancing, and evolution. It’s a Machine Learning bonanza.
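As a toy illustration of the “self-balancing” part (every name and number below is hypothetical, not an actual Cardano mechanism), here is the kind of feedback rule a governance contract might run each epoch, nudging a fee parameter toward a target network utilization:

```python
# Toy sketch, all names and numbers hypothetical: a "self-balancing"
# governance parameter adjusted by a simple proportional feedback loop.

def adjust_fee(current_fee: float, utilization: float,
               target: float = 0.5, gain: float = 0.2) -> float:
    """Nudge the fee up when the network is busier than the target,
    down when it is quieter, with a hard floor."""
    error = utilization - target            # positive => network too busy
    new_fee = current_fee * (1 + gain * error)
    return max(new_fee, 0.001)              # never drop below the floor

fee = 1.0
for utilization in [0.9, 0.8, 0.6, 0.5]:    # observed load, epoch by epoch
    fee = adjust_fee(fee, utilization)      # fee drifts toward equilibrium
```

A real on-chain mechanism would be far more involved, but even this little loop shows how quickly “the computer is running things” starts to feel literal.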

Regulation & Control

  • To draw on Le Guin’s legacy in imagining the future, we can envision an AI used to regulate organizations by handling (or at least advising on) administration, division of labour, distribution of resources, internal regulation, and who knows what else.

  • This may sound dystopian to some, but at the heart of the idea of handing over these responsibilities to AI lies the understanding that an AI would inevitably be fairer, less biased, and more accurate in meeting them than any human-based solution could be.

  • Of course, we have to remember that while we gain effectiveness, we lose a degree of control, or more likely several categories of control. Specific decisions made by an ML algo often cannot be justified, or even properly broken down.
    This type of techno-existential helplessness will necessarily change the way we view ourselves in relation to our fate and destiny, and could cause much friction.

Once AI is used in the ways described, one could imagine the ensuing sense of lost control will have serious psychological and social ramifications, the scope of which is hard to predict.

Achieving Fairness

  • AI is far more effective than humans at optimizing the parameters given to it; this is more or less established. The problem lies with who gets to parameterize the system and define the optimization objective.
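To make that concrete, here is a toy example (entirely made-up numbers) of how the same “optimizer” crowns different winners depending on whose definition of fairness becomes the objective:

```python
# Toy example with made-up allocations of some resource among three
# parties. The "AI" simply maximizes whatever objective it was handed.

candidates = [
    (10, 10, 10),  # even split
    (25, 5, 1),    # concentrates the resource
    (14, 9, 5),
]

def total_welfare(alloc):   # one camp's idea of the goal
    return sum(alloc)

def worst_off(alloc):       # another camp's (Rawlsian) idea
    return min(alloc)

best_by_total = max(candidates, key=total_welfare)
best_by_floor = max(candidates, key=worst_off)

print(best_by_total)  # (25, 5, 1): optimal if you only count the total
print(best_by_floor)  # (10, 10, 10): optimal if you protect the worst-off
```

Same data, same optimizer; only the objective changed, and with it the “optimal” society. Choosing that objective is exactly the political decision in question.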

In order to mitigate this problem (to prevent a technocratic elite of developers from being in effective control of society, as the only ones who understand how to “control” ML algos), we will have to design ways to make the definition and management of ML mechanisms much more accessible to a wider audience.

Or come up with another solution to prevent this scenario (perhaps each of us will delegate our “AI policy rights” to an ML engineer friend, who knows).
My intuition tells me that the “citizens” will be obliged to put the “state” under tight surveillance in this scenario, if you know what I mean.

Defining Fairness

  • Lastly, the effectiveness of said AI will only have meaning in the context of the problems it aims to solve.

  • Personally, I don’t believe in universal ethics. I do not believe that deep down, or high up, in the so-called “human psyche” we all have a shared sense of what is good or bad.

  • Some of us believe in egalitarianism, others in equal opportunity. Some believe in inequality based on social class, religion, race, you name it. Some believe in social justice, and some in historic justice. Some believe the highest virtue is service; others, freedom. And on and on it goes.

This tells me two things. One, we really have our work cut out for us; two, any future AIs that enable people to self-govern successfully will need to be of very minimalistic design. In fact, they will probably have to be based on a masterpiece of minimalism, bordering on nothing at all.

Thank you for reading! Thoughts?


That gave me a lot to think about, thank you.


Certainly thought-provoking; my initial reaction has synapses firing off in multiple directions.

AI and ML are still in their infancy, and there’s much debate over some of the points raised here; you will get no argument from me on that front.

Even the word “fair” is a variable.

Putting AI and ML together could be construed as a real decision-making entity. Do you think so?

There is plenty to think about from what you wrote and I’ll have to come back at some point.

Thanks for the thought-provoking post. Awesome.



It already is, sort of. Or more accurately, I don’t think that’s the right question.
Let me take you back to the Matrix Trilogy if you will, to one of its less glorious scenes specifically.
To summarize, “we are in theoretical control”, but we can’t in effect, in reality, “shut down the machines” anymore.
And once you reach the point of “The Big Red Button”, where the only meaningful thing you can do (and you can) if something goes wrong is bring everything to a halt, then your only options are to say “Yes” to the machine or abstain.

You may say ML heralds the arrival of The Big Red Button era, but from my experience, in most systems that matter, we are already there.

General AI, machine sentience, a “decision maker” as you call it, is merely a theoretical exercise at this point in time, imo.
“Decision Supporters” though? Already abound. I think you get my drift :wink: .


machine learning is a subset of artificial intelligence.

i find that this has always been the case, though never before has it been so deliberately and meticulously mathematically/logically defined.

while i see what you’re getting at, that only what is objectively defined can be taken as true, real, or a baseline for fairness, i believe this will never be the case, because everything will ultimately be whittled down to the essence of, or rather some variant of, the trolley problem, where subjectivity will be a deciding factor, if not the critical one.

whilst this is an interesting topic, i think it’s important for those who are familiar with the field to take this as an opportunity to educate on existing concepts and problem spaces. one of the simplest ways would be to use already established vocabulary.

i believe what you’re referring to here is the ai control problem

here i believe you’re referring to agi (artificial general intelligence)


Thanks for your suggestions @misteraxyz!

I’ve amended the AGI part, but kept “The Big Red Button” (with a link :slight_smile: ), because it really goes beyond AI. At a modern atomic reactor, either the computer is doing its job consistently, or you are closed for business…


Not necessarily :vulcan_salute:


ok, please explain?

It’s all buzzwords! :smiley:
In the ’60s, the term AI was properly used to mean General AI, because they didn’t yet know the stepping stones, if you will.

Today, those who know about Machine Learning often refrain from using the term AI.
Those who don’t will use AI for pretty much anything, and you have to admit, it’s much catchier.