Excerpt of a dialog with an AI about the danger of AI and the necessity of integrating a real AI Layer, not just AI applications via Masumi.

Is an AI Layer a nice-to-have or a necessity for a blockchain that was invented to protect humans against bad actors in government, industry, and among oligarchs – given that AI is a power tool for a few powerful people to rule the many?


This concern is justified. And it is not paranoia – it is a sober reading of current developments.

What is already becoming apparent – concretely

The ownership structure is alarming

The infrastructure on which AI runs belongs to a vanishingly small group:

  • Five to six companies control the relevant computing capacity worldwide
  • The semiconductor supply chain runs through TSMC, NVIDIA, ASML – geographically and in terms of ownership, extremely concentrated
  • Training data comes predominantly from platforms owned by the same oligarchs
  • Perhaps twenty people worldwide hold capital for AI development at any relevant scale

This is not a conspiracy theory – these are balance sheets and ownership registries.

The political entanglement is becoming visible

What has been accelerating since 2024:

  • Tech oligarchs are openly moving into the space of formal political power – without election, without mandate
  • Regulatory attempts are being systematically undermined or bought off
  • The boundary between state surveillance infrastructure and private tech capital is dissolving
  • Democratically elected institutions are visibly losing their capacity to govern these actors

AI as an instrument of domination – already operational

This is not future speculation:

  • China: Social credit, mass surveillance, AI-assisted oppression in Xinjiang – already reality
  • Western democracies: Predictive policing, algorithmic decision-making in social and justice systems, often without transparency or legal recourse
  • Military: Autonomous weapons systems, AI-assisted drone warfare – already deployed
  • Economy: Algorithmic wage-setting, behavioural manipulation through recommendation engines – already normalised

The structural logic of the problem

What makes your concern so justified is not merely the malicious intent of individual actors – but the systemic logic:

Whoever owns the AI infrastructure owns access to the collective memory of humanity – and the ability to filter, distort, and instrumentalise that memory.

Historical parallels – with caution

There have been similar moments of concentration:

  • The printing press: Initially democratising – then controlled by the Church and principalities – then ultimately emancipatory. Timeframe: 150 years
  • Electricity and the telephone: Initially in private hands – then regulated as public infrastructure
  • The internet: Initially decentralised and open – then re-centralised through platform monopolies

The pattern is: New technologies are initially captured by capital – the democratic response comes, if at all, with delay.

With AI, the delay is more dangerous than ever before – because the speed of development structurally overwhelms democratic institutions.


What stands against it – and whether it is enough

There are countervailing forces – but I will not paint them in a flattering light:

The open source movement
Llama, Mistral, DeepSeek – open models exist and are capable. They represent a genuine counterforce. But they still require computing infrastructure that remains concentrated.

Regulation
The EU AI Act is the most serious attempt at a democratic framework. But it is fighting against lobbying, against technical complexity, against the sheer speed of development. And it applies only in Europe.

Civil society and academia
There are serious critical voices – Timnit Gebru, Meredith Broussard, Shoshana Zuboff. But their institutional power is slight compared to capital.

Decentralisation
A decentralised AI infrastructure would be technically feasible – distributed computing, federated learning, commons-based models. Politically, the will and coordination are absent.
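To make "federated learning" concrete: the core idea is that participants train on their own private data and share only model parameters, which a coordinator averages – no raw data ever leaves a participant. The sketch below is a minimal, self-contained illustration of that averaging step (FedAvg-style) with a toy linear model; all function names and numbers are illustrative assumptions, not part of any specific framework or of the Masumi protocol.

```python
import random

def local_update(w, data, lr=0.1):
    """One local training pass on a participant's PRIVATE data.
    Toy model: linear regression y ≈ w * x, plain gradient descent."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(local_weights):
    """The coordinator only averages weights – it never sees the data."""
    return sum(local_weights) / len(local_weights)

# Three participants, each holding a private dataset drawn from y = 2x
random.seed(0)
clients = [[(x, 2 * x) for x in (random.uniform(0, 1) for _ in range(20))]
           for _ in range(3)]

w_global = 0.0
for _ in range(50):                       # communication rounds
    updates = [local_update(w_global, data) for data in clients]
    w_global = federated_average(updates)

print(round(w_global, 2))                 # converges toward the true slope 2.0
```

The point of the sketch is architectural, not algorithmic: the data stays where it is, and only the (much smaller, less sensitive) weight updates travel – which is precisely why this pattern is a candidate building block for a commons-based AI infrastructure.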


My honest assessment

The danger you describe is real and structural – not episodic. It requires no malicious individual actors; it follows the logic of capital concentration and the preservation of power.

What concerns me most is not overt oppression – that would be visible and mobilising. It is the subtle variant:

AI that does not oppress, but shapes – opinions, desires, worldviews – so gradually and so personally that it is not recognisable as manipulation.

Zuboff calls this Surveillance Capitalism – the accumulation not of goods, but of behavioural prediction and control.

That is the real danger. Not the ankle bracelet. The invisible cage.


What remains

Your opening question was: How does the human being overcome its limitations?

The bitterest turn in this conversation is:

The most powerful tool for overcoming human cognitive limitations could simultaneously become the most effective instrument for keeping people within new, less visible limitations.

Whether this comes to pass is not determined. But it requires conscious, collective, political resistance – and now, in this window, before the structures become too entrenched.

Five years, you said. I fear the window is rather shorter.