There will always be someone raising a false alarm, but protection driven by fear, imagined or otherwise, more often leads to its own abuse than to any actual protection.
One of the most famous examples in history is the death sentence of Socrates. We also have many examples from the totalitarian regimes of the past and of today, where state security is confused with political propaganda.
Sorry, it’s very obvious that such things have happened and still do. But where’s the evidence that protection more often leads to abuse? Isn’t this just an assumption?
Right, we have no rigorous evidence either way on such a complex issue. However, that does not mean it is safe to give too much discretion to a supervisory institution. If someone can abuse a power, they will. I would like to cite the Stanford prison experiment, but it has received several criticisms regarding its scientific validity. Perhaps the Milgram experiment remains the best illustration of how authority can be abused.
Damn, I used a truism in a conversation … I’m a bad person. At least it’s not a logical contradiction.
However, I’m not the only one to use rhetorical arguments.
Of course, the issue is all about the right level of discretion, who should exercise it and under what circumstances. Certainly the question cannot be reduced to a single dimension and to only two values.
I don’t want to give the impression that I am totally against information control; that is not my point. I was talking about the how, the who, and the why. I share Omshivaya’s concern about the abuse someone can make of information control in the name of protection. I am of the opinion that a good starting point is to share control among as many independent entities as possible.
I don’t want to give the impression that I am totally against information control.
I tend to react when I see something that seems to say simply “censorship is wrong”, but of course people very often hold more nuanced positions than can be expressed briefly online.
I am of the opinion that a good starting point is to share control among as many independent entities as possible.
I’d like to see some pilot projects exploring the potential of such ideas.
I’d like to see some evidence for that; the reasons I’ve seen concerned public health, public order, and the defense of democracy.
Edit: I got distracted by that particular issue and lost sight of the larger one: when I said I had no idea authority could be abused (with a grinning smiley), did you take it seriously?
However, I take this opportunity to clarify that with
too much discretion
I actually meant a lot of discretion. So it wasn’t intended as a truism, because it is possible to imagine a benevolent dictatorship. Sorry, but English is not my native language.
Sorry, I must have used the wrong reply button, or it would have indicated who it was to in the top-right corner. But I was replying to the person I quoted.
I actually meant a lot of discretion.
Not sure that makes a big difference, TBH. I’m happy to accept it’s all about matters of degree, my main beef is with those who say it’s simple.
If we are going to create an uncensored social media community, couldn’t we also have an option to turn on, with a single click, a censored (community-moderated) version? This could provide a usable platform that isn’t swamped with ads, spam, hate, etc., but if we want to turn off the moderation, the hidden comments and posts would reappear. What would be the consequences of having user-defined moderation preferences? I support free speech and strongly oppose censorship 98% of the time. However, having zero moderation will inevitably lead to an unproductive platform that I have no interest in using. Often the loudest online voices, and those who post most frequently, are not the ones I want to listen to or engage with. I think some form of moderation will always be necessary, and I prefer an approach that puts moderation in the hands of the community and also allows some individual, user-driven preferences.
Why can’t it be a continuum instead of on/off? Moderation should not be the same for everyone. Similar to the existing follow model (“I want to see what you are saying”), we could follow people’s timelines (“I want to see what you are seeing”) and their blocklists (“I want to block what you are blocking”). Of course, how each of these is weighted would be up to the user.
Thus, content filtering could be a cooperative effort, and everyone could set their personal filter level to something between 0% and 100%.
Reputation and rewards could also be tied to this. I could pay someone for curating content close to the way I would.
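The weighted, cooperative filter described above can be sketched in a few lines. Everything here (the `should_hide` function, the data shapes) is a hypothetical illustration rather than any existing platform's API: curators are `(blocklist, weight)` pairs, and a post is hidden when the weighted share of trusted curators blocking it exceeds the user's personal filter level.

```python
def should_hide(post_id, curators, filter_level):
    """Decide whether to hide a post based on followed curators.

    curators: list of (blocklist, weight) pairs, where blocklist is a
              set of post IDs and weight is the user's trust in [0, 1].
    filter_level: personal filter strength in [0, 1];
                  0 = see everything, 1 = hide anything any curator blocks.
    """
    total_weight = sum(weight for _, weight in curators)
    if total_weight == 0 or filter_level == 0:
        # No curators followed, or filtering switched off entirely.
        return False
    block_score = sum(weight for blocklist, weight in curators
                      if post_id in blocklist)
    # Normalize to [0, 1]; a higher filter_level lowers the threshold,
    # so fewer (or less trusted) blocking curators are needed to hide.
    return (block_score / total_weight) > (1 - filter_level)
```

A post blocked by no one is never hidden, even at 100%; the continuum between the extremes is exactly the per-user dial suggested above.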
If he is willing to bear the legal consequences of his actions, let him yell.
An uncensored social network may be the only way for psychohistory to become possible within a few centuries.
The greatest virtue of the blockchain is the inviolability of the ledger.
We cannot tamper with its records without breaking its defining characteristic; if we could, it would no longer be a blockchain.
This is unlike the historical sources that have survived to this day, which have suffered constant intervention and manipulation of the record, forcing historians to carry out a colossal work of investigation.
Social networks write an important part of the history that will be studied in the future.
They are also raw material for the development of psychology, sociology, and the neurosciences.
If we want to contribute to the progress of human knowledge, we cannot mutilate the record of the human condition by intervening in it because of the harms we circumstantially face.
“… it would create reporting requirements demanding people like software developers and even volunteers within decentralized tech projects hand over data or conduct surveillance of their users.”
Mastodon is free, web-based software for federated microblogging, to which anybody can contribute code and which anyone can run on their own server infrastructure if they wish, or join servers run by other people within the fediverse network. Its server side is powered by Ruby on Rails and Node.js, and its front end is written in React.js and Redux. The database is PostgreSQL. The service is interoperable with the decentralized social networks and platforms that use the ActivityPub protocol.
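Because Mastodon exposes a public REST API, the interoperability described above is easy to try directly. The sketch below uses Mastodon's real `GET /api/v1/timelines/public` endpoint (with its `local` and `limit` query parameters); the instance name is a placeholder, and `extract_posts` is a hypothetical helper written just for this example.

```python
import json
from urllib.request import urlopen  # only needed for the live example below

def public_timeline_url(instance, local=False, limit=20):
    """Build the URL for a Mastodon server's public timeline endpoint."""
    return (f"https://{instance}/api/v1/timelines/public"
            f"?local={'true' if local else 'false'}&limit={limit}")

def extract_posts(raw_json):
    """Pull (account, content) pairs out of a timeline JSON response."""
    return [(status["account"]["acct"], status["content"])
            for status in json.loads(raw_json)]

# Live usage (requires network access; instance name is a placeholder):
# with urlopen(public_timeline_url("mastodon.social", limit=5)) as resp:
#     for acct, content in extract_posts(resp.read()):
#         print(acct, content)
```

The `content` field arrives as HTML, so a real client would strip or render the markup before display.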
Unfortunately, there is not much content posted in the Mastodon Fediverse about Cardano specifically, or about blockchain in general, or at least I cannot find it.