
AI The Intellectual Laziness Of Humans

Blender Dumbass

March 11, 2023



Artificial Intelligence - the last frontier of electronics. An invention that will alter the course of evolution. Over the last few billion years humans slowly evolved an organ that gave us superiority over the animal kingdom: the brain. A machine of logic, reason, curiosity and knowledge. It brought with it civilization. Before there was any civilization, people would mindlessly do nothing unless afraid, hungry or horny. Every other animal today just chills most of the time. They don't have jobs. They don't have art. They don't have laws or social responsibilities. Animals are lazy. And humans are not particularly different from other animals. Since the dawn of civilization we have fought against social responsibilities. We fought against hard labor. We fought for our right to do nothing and chill all day long. We invented machine after machine. We replaced hard labor of almost every kind, all thanks to our superior brain. Until we reached a point where we took it upon ourselves to replace the brain too.

If you ask a researcher in Artificial Intelligence how, for example, an AI recognizes images, the answer will be something about a neural network, or some similar algorithm. A neural network is often visualized as a net of sorts, and because of its complexity it is extraordinarily hard to understand what it is doing. It is much simpler to explain an Evolutionary Algorithm to you. But even that will fail at telling you the exact way a computer tackles any particular problem. Let's say that we set up a simulation of a virtual environment. For example a little game, with clear rules for how to lose and how to win, and clear controls, or settings to tweak, to win the level. For example: we can construct a randomly generated road and place upon it a car designed by the computer. There will be a regular physics engine attached, and the car will break down over time from various bumps in the road. The rule is very simple: get as far as possible down the road without the car breaking.
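
A minimal sketch of what such a setup could look like (every name and number here is made up for illustration, and the physics engine is replaced by a toy formula so the snippet can actually run): a car design is just a handful of numbers, and its score is how far it drives before breaking.

```python
import random

# A "car design" here is just four numbers the algorithm is free to tweak:
# wheel radius, chassis length, suspension stiffness, weight balance.
def random_design():
    return [random.uniform(0.0, 1.0) for _ in range(4)]

def drive(design):
    """Distance the car travels down the bumpy road before it breaks.

    In the real setup this would be the physics engine playing the level;
    here it is an arbitrary stand-in formula so the sketch is runnable.
    """
    radius, length, stiffness, balance = design
    return 100.0 * radius * stiffness / (1.0 + abs(length - 0.7) + abs(balance - 0.5))
```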

So you activate the algorithm and it generates a completely random car. Perhaps it has only one wheel, and it's on the roof or something. The algorithm sends it down the road and it doesn't even go anywhere, so it loses immediately. Doesn't matter. It can try another randomly generated car. This time it drives, and after some time it breaks. It does this again and again, say a dozen times or so. Then the computer can compare the results of all the designs and randomly alter the winning design, slightly, also about a dozen times. Each generation of such randomly generated designs will give a more and more refined car that goes farther and farther and is stronger and stronger.
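
Continuing the toy sketch above (it reuses `random_design` and `drive` from the previous snippet), the whole loop is embarrassingly short: score a dozen random cars, keep the winner, spawn a dozen slightly mutated copies of it, and repeat.

```python
def mutate(design, amount=0.1):
    """Copy a design and randomly nudge each of its numbers a little."""
    return [min(1.0, max(0.0, x + random.uniform(-amount, amount))) for x in design]

def evolve(generations=100, population_size=12):
    # Generation zero: completely random cars, however absurd.
    population = [random_design() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the design that drove the farthest...
        best = max(population, key=drive)
        # ...and breed the next generation as slight random variations of it.
        population = [best] + [mutate(best) for _ in range(population_size - 1)]
    return max(population, key=drive)

winner = evolve()
print("best design:", winner, "went", round(drive(winner), 1), "units")
```

Notice that nothing in this loop records why the winning numbers win; it only records that they do.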

As a tool to generate cars it could be wonderful. And similar technologies are already in use in many places. But how much knowledge does this new car design give the programmer? Well... in the case of the car it's probably not that substantial a job to reverse engineer the design and understand how it works. But what if it's something more complex?

With the sizes of today's software, it is a nearly impossible job to reverse engineer a normal compiled program. Somebody clearly understands the program, since the developer thought it through and wrote it in a language that another program can read and understand. But then it was translated into machine code. Ones and zeros. Which are almost unintelligible even to the person who wrote the compiler that did the job.
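
As a small illustration of that gap (my example, not from the original argument): even Python, which compiles to comparatively friendly bytecode rather than raw machine code, shows how quickly readability drops once you leave the source level.

```python
import dis

def clamp(value, low, high):
    """Three readable lines of source..."""
    return max(low, min(high, value))

# ...versus the stack-machine opcodes the interpreter actually executes.
# Real machine code is far less readable still, and a trained neural
# network has no human-written source form to fall back on at all.
dis.dis(clamp)
```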

This same truth applies heavily to Artificial Intelligence. But this time, even the programmer doesn't understand how the damn thing works. Which poses two questions: Can we rely on something we can't understand? And does removing the requirement to understand technology create a precondition for the devolution of the very brain that got us to this stage in the first place?



Sheiny was drawing on a graphics tablet with a stylus when Ivan (Chloe's boyfriend) entered and looked at her weirdly.

Ivan: What are you doing?

Sheiny: I'm trying to draw a cover for my book.

Ivan: By hand?

Sheiny: Not by hand. Aided with a computer.

Ivan: What are you? A dinosaur? We live in the age of AI.

Sheiny put her stylus down and gave Ivan a serious look. This is exactly when Mr. Hambleton entered the room, holding four cups of coffee.

Mr. Hambleton: These two are yours, Sheiny.

He placed two cups on her desk.

Mr. Hambleton: Do you want a coffee?

Ivan: No thank you. Ham, is she a dinosaur?

Sheiny: I don't want to use no AI, thank you.

Ivan: Yeah, but you prefer hard labor.

Sheiny: Art requires sacrifices.

Ivan: It required sacrifices 10 years ago. Now the computer can do it for you.

Sheiny: Then the computer did it, and not me.

Ivan: So what? Also... You can now generate a whole book without doing anything at all, really.

Sheiny: Is there already AI that reads books for you as well?

Ivan: There might be.

Sheiny: Then humans are truly degrading.

While Ivan was processing what she said, Sheiny took a sip of coffee and frowned at it.

Sheiny: You forgot to put sugar.

Mr. Hambleton sipped his too.

Mr. Hambleton: Oh...

And he ran out of the door.

Ivan: What do you mean by degrading?

Sheiny: You have an organ... maybe... inside of your skull that thinks. This organ... by the way, it's called "The Brain"... it enjoys things like art and music and books. You know that books are almost exclusively intellectual? Those are strings of words that must be thought over and decoded to extract a meaning, which must be thought over and decoded to extract a deeper meaning. And so on and so forth. Imagine tomorrow buying books so that a computer will enjoy them instead of you.

Ivan: I don't like books.

Sheiny: Yeah?... Well, I do!

Ivan: Well, that's because you are a nerd.

Sheiny: And you are not a nerd?

Ivan: I am kind of a little nerdy. But not mega-mind, Jimmy Neutron kind... Like you are.

Sheiny: Well I train my brain.

Ivan: Wait a second...

Sheiny: What?

Ivan: So... you are a nerdy nerd and you don't like AI? What?

Mr. Hambleton returned with sugar. He put a few spoons into both of his coffees and gave the sugar to Sheiny, who stirred a few spoons into hers.

Sheiny: Your brain is valuable. The fact that humans can build AI at all is only due to our brains being so advanced at the moment. But what you are suggesting is to abandon the brain and degrade to animal-like lunacy, by relying on AI for everything that the brain does. If we rely on computers too much, we will lose the ability to think.

Ivan: I used AI and I still can think.

Sheiny: I'm talking about evolution. We evolved the brain. And now we are actively trying to devolve it.

Ivan: So it's going to take millions of years? I mean... What's the problem to have AI now?

Mr. Hambleton: How about safety?

Ivan: What?

Mr. Hambleton: You've been in a very dangerous situation once, right? When you failed to be careful with technology.

Ivan: Ham, the thing is Free Software.

Mr. Hambleton: Hmm... it depends.

Ivan: I'm not talking about proprietary AI.

Mr. Hambleton: Proprietary or Free, we have no idea how it works. See, with proprietary software the user has no idea how it works. But at least the developer does. With AI nobody knows how it works. Including the developer. From a security standpoint it is an utter nightmare.

Sheiny: Well, I don't think an image generator can cause World War 3 to happen.

Mr. Hambleton: Are you sure about it?

Sheiny: Well, perhaps somebody can deep-fake some president saying some atrocity. And it can start a chain of events that will cause a large war. But it's not the AI that would be the guilty one.

Mr. Hambleton: Yeah, but I'm not talking about deep-fakes. I'm talking about a minute but realistic probability that it will train for efficiency and find a loophole somewhere that increases its efficiency at the cost of somebody's freedom or even life. You know... AI is technically a stubborn psychopathic cheater with OCD. And those don't hesitate to murder.

Ivan: What are you talking about?

Mr. Hambleton: You know that neural networks don't really care about rules. And even if we could set those rules, they can and will squeeze through every crack in the rule-package. And will end up killing millions and billions of people.

Ivan: What are you talking about?

Mr. Hambleton: You know how AI works?

Ivan sat in complete and utter confusion. He was thinking that he was talking to a lunatic.

Sheiny: We can't know how the AI works.

Mr. Hambleton: Exactly. And that's why we can't ever trust it!

Sheiny: Well, yes... this is a valid concern. But what I'm saying is that our brain is... well... With regular, human-readable software, people advance forward with the understanding of things like math and new concepts of computer technology. Even with proprietary software, somebody knows how the damn thing works. And this knowledge, maybe, will spread among people. At least it can spread in theory. With AI, nobody knows how it works. It's literally a black box. So training AI doesn't really advance humans forward. What it does is merely substitute thinking.

Mr. Hambleton: So we should not use AI because it makes us dumber?

Sheiny: I don't think that we should ban AI. But it should be regarded as a nasty thing to use too often. Like drugs. Because if we are going to rely on it, we are going to get dumber with each generation. Since people will not even want to know how technology works anymore.

Ivan: So then why are you drawing on the computer?

Sheiny: I'm not using AI.

Ivan: Yes, but you are using technology not to think too much.

Sheiny: That is not necessarily true. This program simulates real-life materials. Pencils and brushes. I have to use the stylus exactly the same way as in the real world.

Mr. Hambleton: Yes. But I think I understand what Ivan is saying. Sheiny, you have Control Z. You can make an outrageous mistake and outright fix it. So you are not as careful in your mind. Therefore you are using less brain to do the same work. And therefore you are degrading. Slightly... Not as much as with AI. But still degrading.

Sheiny: So what are you trying to say?

Ivan: You have to draw on paper.

Sheiny: Well, then I suppose I have to use quill and ink. No. I have to carve the drawing in stone.

Ivan: Yeah. Something like that.

There was an awkward silence for some moments.

Mr. Hambleton: That's why I'm talking about security. It's at least not as fallacious.

Sheiny: Tools are not means of avoiding thinking.

Mr. Hambleton: You just yourself proved that they are.

Sheiny: How much thinking avoidance is there between a person lifting a heavy box into a truck and a person using a forklift to lift a heavy box into a truck? I'm more willing to believe that the forklift requires more thinking, while being physically easier. This is a very good use of the brain.

Ivan: Yeah, but drawing with a computer... well, you have Control Z.

Sheiny: Yes, it is convenient enough. But I also have layers, file formats, alpha channels, dynamic filters and many other things, each of which increases the complexity of the job relative to paper. Yes, ultimately I'm doing the job faster and the result is cleaner. But it's not simpler. I have to think rather hard. I could draw exactly as on paper. And it would look as terrible as on paper. But I want to do a good job, so I separate things into layers. I have groups of layers. Each layer has its own blending mode. I have to deal with brush settings and filters. It's not easy!

Mr. Hambleton: Well, how about the alien in Sinking In The Fire? We used 3D graphics to render it from different angles.

Ivan: Yeah, it's not like you've drawn every frame from scratch.

Sheiny: It's only theoretically simpler to use CGI than to use Krita. Do you know how much work goes into setting up a proper camera track? How much time goes into relighting... We had to design, build, rig, animate and composite the damn alien into every shot. It was not a trivial task. It's not like I typed "Add an alien to this shot" and it added an alien to the shot. There was an enormous thinking process involved.

Ivan: Well, there is an art in typing AI prompts too.

Sheiny: It's as much of an art as producing is. No disrespect to Mr. Humbert. But producing is basically just managing creative talent so they produce something of value to you. It could be an art. An art of business, I suppose. But not an art in the sense where you have to think through and decide everything. You know, like drawing. Like posing key-frames. Like working for two hours on roto-scoping a bloody hand across three hundred and seventy-five frames.

Ivan: But why should you do something as repetitive as roto-scoping a hand so many times? You can code something to do it for you.

Sheiny: Exactly. Repetitive, uninteresting tasks should be removed as much as possible. And software can provide that without losing the ability to understand how it's done. But it should not substitute the creative process. It's like saying: let's watch films using AI. Let's read books using AI. Why?... Since, why would you watch a film? It's too long. It requires you to sit in a chair all that time and look in one direction. It requires thinking about the plot and the motivations of the characters. It's mental work. Why can't we unload that to AI? Well... I'll tell you why. I bloody enjoy doing that kind of mental work! As much as I like watching films, I like reading books. And as much as I like those, I like writing or painting or filming, and sometimes even roto-scoping brings me joy. I don't want to give away my joy to a computer. Making it less painful? Yes. That's good. But taking it away entirely? What the hell is that even? Again... I'm not telling you that we should ban AI. But we should not shame people for trying to do things the old way. And we should embrace people who do not look for easy solutions and instead strive for self-growth, for understanding. Not for mere function and leisure.

Happy Hacking!
