"They are taking a mild issue and trying to fix it with absolute apocalypse!" - this became a sentence I like to repeat in various comment sections since I
calculated the impact of mass surveillance on well-being of the population. Even in relation to murder, a huge problem, even in relation to the worst offenders in this category, murder is a mild issue, compared to the absolute apocalypse that is total mass surveillance. If you are not convinced, click on that link and go see my calculations for yourself.
But since then I have started noticing that this quote applies to so much more. They took a mild issue, teenagers feeling a bit sad, and tried to solve it with absolute apocalypse: banning all teenagers from engaging with any internet platform that allows users to post things. If you don't believe me, read my article about
that new Australian law. In it I deliberately avoided talking about anything but the age-discrimination aspect, because I thought people would use the comment section to point out and discuss everything else. But I guess I should address it here.
They specified a penalty of $50 million for any social media operator that allows people under 16 to engage with it. They
stated that "the law does not require users to upload government IDs as part of the verification process". But how else would it work? Any forum or other platform that lets you upload anything will need to fully verify your identity before letting you interact at all. Operators are not required to use the ID, but do they really have a choice, with $50 million at stake?
What about
Mastodon? What about small self-hostable forum software that kids could and should host themselves? Are kids going to pay $50 million because they want a small community of their own on the internet? Here is a
funny article on the fediverse that pokes fun at this stupidity.
Yes, the government says not all platforms will fall under the law, but that just makes me think they wanted a good excuse to implement a law that gives them total power to choose whom to shut down. And as always, they are using kids for it.
I could have made a calculation similar to the one I did for mass surveillance, but I hope this one is obvious. I hope you can see how out of proportion the response is to the cause. It could have helped somewhat to, say, discourage kids from using social media that makes them feel bad, for example by talking to them about the issue in school. That could have produced some good, relative to the problem at hand. But the law Australia put forward is so much worse than what it solves that I fail to understand how they allowed it in the first place.
They are taking a mild issue and trying to fix it with absolute apocalypse!
Every time the cost of implementing a law outweighs any potential benefit of it, you kind of have to ask yourself: are they doing this for a reason? For a reason they don't want me to notice?
This is not some kind of conspiracy theory. Lawmakers are people. They are people with ambitions for power; this is why they went into politics to begin with. They want some control over something, often even for good reasons. But they delude themselves that they could control their solutions. They delude themselves that the problems they face are so important that nothing should stand in the way. That everything should just yield immediately to the will of those people, because look - something important.
I see most social media as an absolute dumpster of insanity, powered by totalitarian tech that disrespects everyone
it uses for its own gain. They should be stopped! Yes! I can see it. But should we, in pursuing that, make everything so much worse?
The idea of social media is not evil at its core. What is evil are certain implementations of it, certain trends and technologies. The privacy-invasive, engagement-optimizing algorithm, for example, is a big part of why the problem exists. The algorithm, if there is one, should be under the total control of the user. But it isn't! The company should not be in control of the algorithm. But it is! If the user wants to be recommended something they might find interesting, that should not be a problem. But if the user doesn't want to see something, it should not be shown to them. Therefore the user should be the one deciding how the algorithm chooses what to show and what not to show. Instead, there is an
AI black box that makes all of the decisions and is optimized for one thing only: keeping the user on the platform. And this is the problem!
Mastodon takes the ideas behind the algorithm and delivers them without the algorithm. Not perfectly, but far better than any algorithm has managed so far. You can follow people and see all their posts, or you can follow hashtags and see every post with that hashtag. That's all. The user controls what the user follows, and therefore controls what ends up in the feed. The problem of the algorithm simply is not present here.
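The follow-based model is simple enough to sketch in a few lines. This is an illustrative toy, not how Mastodon is actually implemented; the names (`build_feed`, `Post`, `User`) are my own invention:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    author: str
    hashtags: frozenset
    timestamp: int
    text: str

@dataclass
class User:
    followed_accounts: set = field(default_factory=set)
    followed_hashtags: set = field(default_factory=set)

def build_feed(user, all_posts):
    """A follow-based feed: a post appears only if the user follows its
    author or one of its hashtags, shown newest first. No ranking, no
    engagement optimization - the user's follows ARE the algorithm."""
    selected = [
        p for p in all_posts
        if p.author in user.followed_accounts
        or p.hashtags & user.followed_hashtags
    ]
    return sorted(selected, key=lambda p: p.timestamp, reverse=True)
```

Note that there is no tunable knob here for the platform to turn: the only inputs are things the user explicitly chose to follow.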
Instead, the Australian government simply wants to ban all social media for everyone under 16. That does not make the problem go away: everybody over 16 will still get the same crappy algorithm, the same privacy violations, the same problems with all the social media anyway. And because verifying a user's age will almost certainly require the user to hand over more personal data, it will make the problem even worse. That is before we even return to the points above: how this will cripple the social media that is actually trying to be good, the small forums run by a single person who has neither $50 million nor any age-verification technology, and the small communities of the very kids we are so eager to protect.
They are taking a mild issue and trying to fix it with absolute apocalypse!
A better law would, for example, give the user control over the algorithm. But I bet the social media companies have already argued that this is practically impossible, because they themselves have no control over the algorithm. Well... then perhaps you should make sure the user can disable the algorithm, or, even better, use a third-party algorithm that matches what the user actually wants. The EU has made actual progress in this area, probably in part because the
Free Software Foundation Europe lobbies a lot for good laws there, and probably because of the
Pirate Parties that have some influence there too.
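Mechanically, "user control over the algorithm" could mean something like a pluggable ranking function: the platform exposes the raw posts, and the user (or a third party the user trusts) supplies the ranker, with "no algorithm" as the chronological default. A toy sketch of that idea, with all names my own assumption:

```python
def chronological(posts):
    """The 'algorithm disabled' option: just newest first."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def make_keyword_filter(blocked_words):
    """A user-authored ranker: drop anything the user does not want to
    see, then fall back to chronological order."""
    def ranker(posts):
        kept = [
            p for p in posts
            if not (blocked_words & set(p["text"].lower().split()))
        ]
        return chronological(kept)
    return ranker

class Feed:
    """The platform holds the posts; the USER supplies the ranker."""
    def __init__(self, posts, ranker=chronological):
        self.posts = posts
        self.ranker = ranker

    def set_ranker(self, ranker):
        # Swap in a third-party algorithm at any time.
        self.ranker = ranker

    def show(self):
        return self.ranker(list(self.posts))
```

The point of the design is that the engagement incentive disappears: the platform never gets to decide what `ranker` does, so it cannot optimize the feed for time-on-site behind the user's back.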
But even that did not stop the EU from deciding that "chat control" is a good idea worth considering. Again they took a mild problem and tried to fix it with absolute apocalypse. A problem that sounds very bad, but if you
calculate its impact on overall freedom and compare that to what the proposed law does to solve it, you can see that it is so out of proportion that it is insane to even consider.
Perhaps here too the government just wanted more control and needed an excuse for it, so that people would stop arguing with them about it. And now, I hope, you know how to spot an evil law.
Happy Hacking!!!