
AI The Intellectual Laziness Of Humans

March 11, 2023


[avatar]by Blender Dumbass

Aka: J.Y. Amihud. Jewish by blood, a multifaceted artist with experience in film-making, visual effects, programming, game development, music and more. A philosopher at heart. An activist for freedom and privacy. Anti-Paternalist. A user of Libre Software. Speaks at least 3 human languages. The writer and director of the 2023 film "Moria's Race" and the lead developer of its game sequel "Dani's Race".


From 3 years ago.
Information or opinions might not be up to date.


16 Minute Read



Artificial Intelligence - the last frontier of electronics. An invention that will alter the course of evolution. Over billions of years of evolution we slowly developed an organ that gave us superiority in the animal kingdom. The brain. A machine of logic, reason, curiosity and knowledge. It brought with it civilization. Before there was any civilization people would mindlessly do nothing unless afraid, hungry or horny. Every other animal today just chills most of the time. They don't have jobs. They don't have art. They don't have laws or social responsibilities. Animals are lazy. And humans are not particularly different from other animals. Since the dawn of civilization we have fought against social responsibilities. We fought against hard labor. We fought for our right to do nothing and chill all day long. We invented machine after machine. We replaced hard labor of almost every kind, all thanks to our superior brain. Until we reached a point where we took it upon ourselves to replace the brain too.

If you ask a researcher in Artificial Intelligence how, for example, it recognizes images, the answer will be something about a neural network, or some similar algorithm. A neural network is often visualized as a net of sorts, and because of its complexity it is extraordinarily hard to understand what it is doing. It is rather simpler to explain an Evolutionary Algorithm. But even that will fail at telling you the exact way a computer tackles any particular problem. Let's say we set up a simulation of a virtual environment. For example a little game, with clear rules for how to lose and how to win, and clear controls, or settings to tweak, to beat the level. For example: we can construct a randomly generated road and place upon it a car designed by the computer. There will be a regular physics engine attached, and the car will break down over time from various bumps in the road. The rule is very simple: get as far as possible down the road without the car breaking.

So you activate the algorithm and it generates a completely random car. Perhaps it has only one wheel and it's on the roof or something. It sends it down the road, it doesn't even go anywhere, so it loses immediately. Doesn't matter. It can try another randomly generated car. This time it drives, and after some time it breaks. It does this again and again, say a dozen times or so. Then the computer can compare the results of all the designs and randomly alter the winning design, again about a dozen times. Each generation of such randomly generated designs yields a more and more refined car that goes farther and farther and gets stronger and stronger.
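The loop described above can be sketched in a few lines of Python. This is a minimal hill-climbing flavor of an Evolutionary Algorithm under toy assumptions: here a "car" is just three made-up numbers, and the `fitness` function is a stand-in score invented for illustration, not a physics engine simulating a road.

```python
import random

def fitness(design):
    # Hypothetical stand-in for "how far down the road the car got".
    # Rewards big wheels, penalizes a tall body and a weight far from 1.
    wheel, height, weight = design
    return wheel * 2.0 - height - abs(weight - 1.0)

def random_design():
    # A completely random car: three parameters in an arbitrary range.
    return [random.uniform(0.0, 2.0) for _ in range(3)]

def mutate(design, scale=0.1):
    # Slightly alter the winning design at random (clamped at zero).
    return [max(0.0, gene + random.gauss(0.0, scale)) for gene in design]

def evolve(generations=50, pop_size=12):
    # Start with a dozen completely random cars.
    population = [random_design() for _ in range(pop_size)]
    best = max(population, key=fitness)
    history = [fitness(best)]
    for _ in range(generations):
        # Keep the winner and breed a dozen mutated copies of it.
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
        best = max(population, key=fitness)
        history.append(fitness(best))
    return best, history
```

Because the previous winner always stays in the population, the best score can only improve (or stay put) from one generation to the next, which is why the cars get farther and stronger; yet nothing in the loop tells you in advance what design the process will settle on.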

As a tool to generate cars it could be wonderful. And similar technologies are already in use in many places. But how much knowledge does this new car algorithm give the programmer? Well... in the case of the car it's probably not that substantial of a job to reverse engineer the design and understand how it works. But what if it's something more complex?

With the sizes of today's software it is nearly impossible to reverse engineer a normal compiled program. Somebody clearly understands the program, since a developer thought it through and wrote it in a language that another program can read and understand. But then it was translated into machine code. Ones and zeros. Which is almost unintelligible even to the person who wrote the compiler to do that job.

This same truth applies heavily to Artificial Intelligence. But this time, even the programmer doesn't understand how the damn thing works. Which poses two questions: Can we rely on something we can't understand? And does removing the requirement to understand technology create a precondition for the devolution of the human brain that got us to this stage in the first place?



Sheiny was drawing on a graphical tablet with a stylus when Ivan ( Chloe's boyfriend ) entered and looked at her weirdly.

Ivan: What are you doing?

Sheiny: I'm trying to draw a cover for my book.

Ivan: By hand?

Sheiny: Not by hand. Aided with a computer.

Ivan: What are you? A dinosaur? We live in the age of AI.

Sheiny put her stylus down and looked at Ivan with a serious look. This is exactly when Mr. Hambleton entered the room. He was holding in his hands four cups of coffee.

Mr. Hambleton: These two are yours, Sheiny.

He placed two cups on her desk.

Mr. Hambleton: Do you want a coffee?

Ivan: No thank you. Ham, is she a dinosaur?

Sheiny: I don't want to use no AI, thank you.

Ivan: Yeah, but you prefer hard labor.

Sheiny: Art requires sacrifices.

Ivan: It required sacrifices 10 years ago. Now the computer can do it for you.

Sheiny: Then the computer did it, and not me.

Ivan: So what? Also... You can now generate a whole book without doing anything at all, really.

Sheiny: Is there already AI that reads books for you as well?

Ivan: There might be.

Sheiny: Then humans are truly degrading.

While Ivan was processing what she said, Sheiny took a sip of coffee and frowned at it.

Sheiny: You forgot to put sugar.

Mr. Hambleton sipped his too.

Mr. Hambleton: Oh...

And he ran out of the door.

Ivan: What do you mean by degrading?

Sheiny: You have an organ... maybe... inside of your skull that thinks. This organ... by the way it's called "The Brain" ... it enjoys things like art and music and books. You know that books are almost exclusively intellectual? Those are strings of words that must be thought over and decoded to extract a meaning, which must be thought over and decoded to extract a deeper meaning. And so on and so forth. Imagine tomorrow buying books so a computer will enjoy them for you.

Ivan: I don't like books.

Sheiny: Yeah?... Well, I do!

Ivan: Well, that's because you are a nerd.

Sheiny: And you are not a nerd?

Ivan: I am kind of a little nerdy. But not mega-mind, Jimmy Neutron kind... Like you are.

Sheiny: Well I train my brain.

Ivan: Wait a second...

Sheiny: What?

Ivan: So... you are a nerdy nerd and you don't like AI? What?

Mr. Hambleton returned with sugar. He put a few spoons into both of his coffees and gave the sugar to Sheiny, who stirred a few spoons into hers.

Sheiny: Your brain is valuable. The fact that humans can build AI at all is only due to our brains being so advanced at the moment. But what you are suggesting is to abandon the brain and degrade to animal-like lunacy, by relying on AI for everything that the brain does. If we rely on computers too much, we will lose the ability to think.

Ivan: I used AI and I still can think.

Sheiny: I'm talking about evolution. We evolved the brain. And now we are actively trying to devolve it.

Ivan: So it's going to take millions of years? I mean... What's the problem to have AI now?

Mr. Hambleton: How about safety?

Ivan: What?

Mr. Hambleton: You've been in a very dangerous situation once, right? When you failed to be careful with technology.

Ivan: Ham, the thing is Free Software.

Mr. Hambleton: Hmm... it depends.

Ivan: I'm not talking about proprietary AI.

Mr. Hambleton: Proprietary or Free, we have no idea how it works. See, with proprietary software the user has no idea how it works. But at least the developer does. With AI nobody knows how it works. Including the developer. From the security standpoint it is an utter nightmare.

Sheiny: Well, I don't think an image generator can cause World War 3.

Mr. Hambleton: Are you sure about it?

Sheiny: Well perhaps somebody can deep-fake some president saying some atrocity. And it can start a chain of events that will cause a large war. But it's not AI that was the guilty one.

Mr. Hambleton: Yeah, but I'm not talking about deep-fakes. I'm talking about a minute but realistic probability that it will train for efficiency and find a loophole somewhere that increases its efficiency at the cost of somebody's freedom or even life. You know... AI is technically a stubborn psychopathic cheater with OCD. And those don't hesitate to murder.

Ivan: What are you talking about?

Mr. Hambleton: You know that neural networks don't really care about rules. And even if we could set those rules, they can and will squeeze through every crack in the rule-package. And will end up killing millions and billions of people.

Ivan: What are you talking about?

Mr. Hambleton: You know how AI works?

Ivan sat in complete and utter confusion. He was thinking that he was talking to a lunatic.

Sheiny: We can't know how the AI works.

Mr. Hambleton: Exactly. And that's why we can't ever trust it!

Sheiny: Well, yes... this is a valid concern. But what I'm saying is that our brain is... well... With regular, human-readable software, people advance forward in their understanding of things like math and new concepts of computer technology. Even with proprietary software, somebody knows how the damn thing works. And this knowledge, maybe, will spread among the people. At least it can spread in theory. With AI, nobody knows how it works. It's literally a black box. So training AI doesn't really advance humans forward. It merely substitutes for thinking.

Mr. Hambleton: So we should not use AI because it makes us dumber?

Sheiny: I don't think that we should ban AI. But it should be regarded as a nasty thing to use too often. Like drugs. Because if we are going to rely on it, we are going to get dumber with each generation. Since people will not even want to know how technology works anymore.

Ivan: So then why are you drawing on the computer?

Sheiny: I'm not using AI.

Ivan: Yes, but you are using technology not to think too much.

Sheiny: That is not necessarily true. This program simulates real life materials. Pencils and brushes. I have to use the stylus the same exact way as in the real world.

Mr. Hambleton: Yes. But I think I understand what Ivan is saying. Sheiny, you have control-Z. You can make an outrageous mistake and outright fix it. So you are not as careful within the mind. Therefore you are using less brain to do the same work. And therefore you are degrading. Slightly... Not as with AI. But still degrading.

Sheiny: So what are you trying to say?

Ivan: You have to draw on paper.

Sheiny: Well, then I suppose I have to use quill and ink. No. I have to carve the drawing in stone.

Ivan: Yeah. Something like that.

There was an awkward silence for some moments.

Mr. Hambleton: That's why I'm talking about security. It's at least not as fallacious.

Sheiny: Tools are not means of avoiding thinking.

Mr. Hambleton: You just yourself proved that they are.

Sheiny: How much thinking avoidance is there between a person lifting a heavy box into a truck and a person using a forklift to lift a heavy box into a truck? I'm more willing to believe that the forklift requires more thinking, while being physically easier. That is a very good use of the brain.

Ivan: Yeah, but drawing with a computer... well you have control Z.

Sheiny: Yes, it is convenient enough. But I also have layers, file formats, alpha channels, dynamic filters and many other things, each of which increases the complexity of the job relative to paper. Yes, ultimately I'm doing the job faster and the result is cleaner. But it's not simpler. I have to think rather hard. I could draw exactly as on paper. And it would look as terrible as on paper. But I want to do a good job, so I separate things into layers. I have groups of layers. Each layer has its own blending mode. I have to deal with brush settings and filters. It's not easy!

Mr. Hambleton: Well how about the alien in Sinking In The Fire. We used 3D graphics to render it from different angles.

Ivan: Yeah, it's not like you've drawn every frame from scratch.

Sheiny: It's only theoretically simpler to use CGI than to use Krita. You know how much work goes into setting up a proper camera track? How much time goes into relighting... We had to design, build, rig, animate and composite the damn alien into every shot. It was not a trivial task. It's not like I typed "Add an alien to this shot" and it added an alien to the shot. There was an enormous thinking process involved.

Ivan: Well, there is an art in typing AI prompts too.

Sheiny: It's as much an art as producing. No disrespect to Mr. Humbert. But producing is basically just managing creative talent so they produce something of value to you. It could be an art. An art of business, I suppose. But not an art in the sense where you have to think through and decide everything. You know, like drawing. Like posing key-frames. Like working for two hours on roto-scoping a bloody hand for three hundred and seventy five frames.

Ivan: But why should you do something as repetitive as roto-scoping a hand so many times? You can code something to do it for you.

Sheiny: Exactly. Repetitive, uninteresting tasks should be removed as much as possible. And software can provide that without losing the ability to understand how it's done. But it should not substitute for the creative process. It's like - let's watch films using AI. Let's read books using AI. Why?... Since, why would you watch a film? It's too long. It requires you to sit in a chair all that time and look in one direction. It requires thinking about the plot and the motivations of the characters. It's mental work. Why can't we unload that to AI? Well... I'll tell you why. I bloody enjoy doing that kind of mental work! As much as I like watching films, I like reading books. And as much as I like those, I like writing or painting or filming, and sometimes even roto-scoping brings me joy. I don't want to give away my joy to a computer. Making it less painful? Yes. That's good. But taking it away entirely? What the hell is that even? Again... I'm not telling you that we should ban AI. But we should not shame people for trying to do things the old way. And we should embrace people that do not look for easy solutions and instead strive for self-growth, for understanding. Not for mere function and leisure.

Happy Hacking!









