AI is a massive, provocative, often misconceived topic, so I feel that I should explain the focus of this discussion.
When talking about AI, I am not talking about real AI, or thinking machines, or anything like that.
When I say AI, I merely mean good old “smart” machines, which may or may not employ machine learning techniques in their algorithms.
The idea of combining AI with blockchain governance is obvious.
Well, it should be to those who have read Ursula K. Le Guin’s novel “The Dispossessed” (1974), where (* * Mini-Spoiler * *) the anarchist moon Anarres is governed by independent syndicates, whose market is regulated by a “computerized exchange”, very reminiscent of, you guessed it, AI. (I will not go into it right here, but just go read it.)
Blockchain in general, and Cardano in particular, herald the implementation of governance and regulation of enterprise through smart contracts.
It really is only a question of complexity imo, and I argue that using smart contracts to these ends will quickly give the impression that “the computer is running things” (whether machine learning elements are included or not).
Complexity is the key word here, however you spin it.
We bring in machine learning to solve problems we can’t untangle ourselves, where it’s too difficult for our brains even to see the pattern. And since the problems we are discussing in this context come from game theory, economics, and “psychology”, I can be pretty confident when I say we are talking about complex systems here.
I won’t go into it here, but the type of problems smart contracts will need to solve (especially in the context of governance and regulation) involves prediction, self-balancing, and evolution. It’s a machine learning bonanza.
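To make the “self-balancing” idea concrete, here is a toy sketch (not a real smart contract, and not anything Cardano actually implements): a protocol fee that nudges itself toward a target utilization through a simple feedback rule. All names and numbers are my own illustrative assumptions.

```python
# Toy illustration of a self-balancing governance parameter.
# The function names, target, and gain are assumptions for this sketch only.

def adjust_fee(current_fee: float, utilization: float,
               target: float = 0.5, gain: float = 0.2) -> float:
    """Raise the fee when utilization exceeds the target, lower it when below."""
    error = utilization - target          # how far we are from the target
    new_fee = current_fee * (1 + gain * error)
    return max(new_fee, 0.0)              # a fee cannot go negative

# Simulate a few epochs of the feedback loop: high demand pushes the fee up,
# which in turn (in a real system) would push utilization back down.
fee = 1.0
for utilization in [0.9, 0.8, 0.6, 0.5]:
    fee = adjust_fee(fee, utilization)
```

The point of the sketch is only that such a rule needs no human in the loop: the mechanism observes, predicts nothing fancy, and still gives the impression that “the computer is running things.”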
Regulation & Control
To draw on Le Guin’s legacy and imagine the future, we can envision an AI used to regulate organizations by handling (or at least advising on) administration, division of labour, distribution of resources, internal regulation, and who knows what else.
This may sound dystopian to some, but at the heart of the idea of handing these responsibilities over to AI lies the understanding that an AI would inevitably be much more fair, unbiased, and accurate in meeting them than any human-based solution could be.
Of course, we have to remember that while we gain effectiveness, we lose a degree of control, or more likely several categories of control. Specific decisions made by an ML algorithm often cannot be justified, or even properly broken down.
This type of techno-existential helplessness will necessarily change the way we view ourselves in relation to our fate and destiny, and could cause much friction.
Once AI is used in the ways described, one can imagine that the ensuing sense of lost control will have serious psychological and social ramifications, the scope of which is hard to predict.
AI is far more effective than humans at optimizing the parameters it is given; this is more or less established. The problem lies with who gets to parameterize the system and define the optimization.
To mitigate this problem (to prevent a technocratic elite of developers from being in effective control of society, as the only ones who understand how to “control” ML algorithms), we will have to design ways to make the definition and management of ML mechanisms much more accessible to a wider audience.
Or come up with another solution to prevent this scenario (perhaps each of us will delegate our “AI policy rights” to an ML engineer friend, who knows).
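The delegation idea above can be sketched in a few lines. This is purely illustrative: the names, weights, and the stake-weighted-average rule are assumptions of mine, not a proposal from any existing protocol.

```python
# Toy sketch of delegated "AI policy rights": each citizen delegates their
# say over a policy parameter to a delegate they trust, and the parameter
# becomes the weight-averaged value of the delegates' preferences.

def aggregate_policy(delegations: dict, preferences: dict) -> float:
    """delegations: citizen -> (delegate, weight); preferences: delegate -> value."""
    total_weight = sum(weight for _, weight in delegations.values())
    weighted_sum = sum(preferences[delegate] * weight
                       for delegate, weight in delegations.values())
    return weighted_sum / total_weight

delegations = {"alice": ("eng1", 2.0), "bob": ("eng2", 1.0)}
preferences = {"eng1": 0.3, "eng2": 0.9}
policy = aggregate_policy(delegations, preferences)  # (0.3*2 + 0.9*1)/3 = 0.5
```

Even this trivial rule raises the governance question from above: whoever defines the aggregation function is, in effect, parameterizing the system for everyone else.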
What my intuition tells me is that the “citizens” will be obliged to put the “state” under tight surveillance in this scenario, if you know what I mean.
Lastly, the effectiveness of said AI will only have meaning in the context of the problems it aims to solve.
Personally, I don’t believe in universal ethics. I do not believe that deep down, or high up, in the so-called “human psyche” we all share a sense of what is good or bad.
Some of us believe in egalitarianism, others in equal opportunity. Some believe in inequality based on social class, religion, race, you name it. Some believe in social justice, and some in historic justice. Some believe the highest right is service; others, freedom. And on and on it goes.
This tells me two things. One, we really have our work cut out for us; and two, any future AIs that enable people to self-govern successfully will need to be of very minimalistic design. In fact, they will probably have to be masterpieces of minimalism, bordering on nothing at all.
Thank you for reading! Thoughts?