Aka: J.Y. Amihud. Jewish by blood, a multifaceted artist with experience in film-making, visual effects, programming, game development, music and more. A philosopher at heart. An activist for freedom and privacy. Anti-Paternalist. A user of Libre Software. Speaker of at least 3 human languages. The writer and director of the 2023 film "Moria's Race" and the lead developer of its game sequel "Dani's Race".
Since the last post in this series, the project I've been working on has become way more ambitious. The original problem I was trying to solve was figuring out how to use computer graphics ( mainly Blender ) to cut costs for an action scene in a movie I'm developing. It has cars, so the idea was to try to render some CG cars into real-life footage. You can go read Part 2 to see my attempts at this sort of thing.
The two main issues for me, when it comes to doing it with CGI, were, first of all, making it not look like the CGI cars in Shazam 2, which looked very computer generated in the rendering department. And second, not making it look like the CGI cars in Havoc, where the rendering was solid, but the motion of the cars betrayed the fact that they were computer generated.
Thinking that I was very close to beating Shazam 2 in the rendering department ( even though I'm sort of still unsure about it ), I decided to use some of my time to train myself in animating the car. I did a few practice animations using footage of real life driving and the Honda Civic rig I already made for the earlier tests.
Mastodon post with reference attached. This was the first test that I did. It was from a video that had some drifting but also some basic slow driving. I used multiple clips of this video.
Mastodon post with reference attached. I decided, for some reason, that I wanted to learn how to animate drifting properly. I mean, technically it is important for a high-octane chase scene at the end of the day. But I guess I was just chasing exciting things to do.
Mastodon post with reference attached. Finally I took some time to do boring tasks. With this one in particular, I was under the impression that the car would sink backwards much harder as the acceleration is applied. If I had animated it without the reference, I would have exaggerated everything way too much, while in reality those movements are perceived more subconsciously than consciously.
Mastodon post with reference attached. Adding to the list of very important boring shots, I animated this shot, which doesn't seem to need much animation. It is a car just going straight. What is there to animate? Well, apparently there are enough little shakes, micro-jitters and other details that matter in selling something this simple.
Mastodon post with reference attached. This is when I pivoted to animating from shots in movies. This one is from Transformers. Michael Bay managed to capture an amazingly dynamic shot with a lot of energy to it. And I know that at the end of the day, the stuff I will be animating for the final project will be something at this level of energy. I ended up using a small script I wrote for Moria's Race for the camera shake.
Mastodon post with reference attached. This was an interesting find. It is from the Nicolas Winding Refn film Drive. It is both kind of high-energy cinema stuff and a simple, banal shot of a car just going in a straight line. A perfect combination of the two modes. And a good example of the kind of shot I would actually need to do in CGI for the film I'm making. The action is fast enough to prompt a full CGI shot.
Which brings me to the other problem I'm facing...
It is one thing to composite a CGI car into some camera footage. As you could tell from my previous articles on the matter, it is quite doable. It is a completely different thing to try to fully create a shot in 3D.
Back when I did The Package, The Car And The Time Is Running Out the most successful shots were compositing ones. The cars in that little test of mine were not nearly as good as even the Honda I made for the current tests ( and the Honda is nowhere near as good as the hero cars I'm planning to do for the final thing ). And yet once or twice, I tried doing fully CGI shots in that movie.
Why did I do that? What was the reason to try to do full CGI shots?
Well, the reason was directorial. For the sequence to work the way I thought I needed it to work, when I was designing the shots, I realized that this part of the action needs the camera to go with the action itself, moving at the same speed as the cars. If I tried to "film" it from a stationary position where the cars go by the camera ( which would be possible with just compositing ), the action would not be clear enough to make sense. And the eventual setup and payoff would be wasted.
I attempted to simply walk with the camera through the position of the shot in real life, in hopes that I could speed the footage up to the chase speed and stabilize it enough to make it work. But it didn't work. Which meant that the only way I could get this shot ( with the tools I had ) was to do the whole thing fully in CGI.
I'm pretty sure that, making the movie I'm currently making, I will have similar issues. Which means I need a way to make fully CGI renders of cars on streets without any of it looking fake. Or at the very least with it looking believable enough to sell the trick properly.
At first I simply decided to make a library of common objects found on the streets in the towns close to me. Things like trash cans and the like.
After I made a significant number of public-infrastructure objects, I decided it would be nice to copy one whole street: I would look at the real street and, in order to copy everything from it into a digital one, model all of the objects required for it. It turned out to be not that simple at all.
My first problem was measuring the real street. I have only a small ruler, and maybe I could count steps between things, but nothing more precise than that. What I could do is shoot a bunch of videos of the street with parallax, and 3D track those videos inside of Blender, which also gives me the positions of those 3D tracking dots. If I do that enough times from multiple angles, I may start to be able to recreate the layout of everything in 3D space. This is what I was doing on the 28th of November.
Then I had to figure out the curb. Unlike my futuristic Moria's Race saga, the real world has little concrete blocks laid down between the road and the sidewalk, called a curb. I could go and model this whole thing by hand, but the whole thing is so uniform and so repetitive anyway. And since I was developing a technique for making full CGI scenes, I figured I could spend some time figuring out a procedural way of laying down the curb.
Originally I made a simple primitive curb distributor with a curve object and an array modifier. Basically, you edit a curve, and the curb duplicates itself enough times to fit the length of said curve. And then deforms itself into the shape of the curve. This is how anybody would have done something like this back in the day, before Geometry Nodes were a thing.
I did consider writing Python code for the automatic curb distribution, since I didn't feel comfortable enough with Geo-Nodes, mainly because I had never truly tried to learn them before. But now was the best excuse to dive deep into them. So I decided Geo-Nodes it is. And started trying to do something with them.
At first I was a stupid idiot trying to understand how anything works at all. I even watched tutorials about the damn thing, which still didn't help me at all. Can you believe it: me, Blender Dumbass, watching Blender tutorials? What? I saw the introduction video about Geo-Nodes by Andrew Price ( Blenderguru ), but in my opinion his explanation was dog-shit. He completely refused to mention the most important part about Geometry Nodes: the function generation.
In any other place in Blender, be it the compositor or Shader Nodes, what flows through the connections is data. There is some data on the left and it flows towards the right. In the example above you can see 4 nodes, starting with an image texture ( A ) and ending at the surface output ( D ). You can see that the image data goes into the Brightness/Contrast node ( B ) where some contrast is added. Then this slightly more contrasty image goes into the Base Color input of the Principled BSDF shader ( C ), which you can think of as the most common type of shader in Blender. And then this shader data goes into the object's surface. Everything is clear. There is no confusion. And everything makes sense.
So let's apply this logic to Geo-Nodes and Curbs.
Say I have a mesh like this.
And I want to make it look like this. With a curb magically in its right place. I can try to think about it the way I would think about regular node data flow.
I would look at my mesh and think about how I would do this. I could for example put points on every edge ( every line ) and then put copies of the curb on those points. This sounds reasonable.
So I take my original geometry and pass it through Mesh to Points node.
Wait, no... this doesn't make any sense. I need the curb only where I need it, not everywhere. I suppose I can select the vertices of where I want the curb to be and then add those vertices into a vertex group, which I hope I can reference somehow from the Geometry Nodes.
Sure, a vertex group is a form of attribute. And I can reference that attribute. And it seems to not explode on me.
And it seems to bunch up those points only around where I wanted my curb to be. But it is still not perfect.
Will the color-ramp node work to bunch it up a little closer? If I move the black close to the white, maybe it will make the effect of the vertex group stronger, so to speak.
Yay! It works. These Geometry Nodes are not that hard after all. I can do it.
I suppose all I need to do is drag and drop the object of the curb from the outliner and connect it to the points and...
...right! Okay... so they were spawned in the correct places. All I need to do now is to rotate them properly, stretch them and so on. I suppose now I need to figure out how to do a for loop.
That is, ladies and gentlemen, where I got stuck. That is where Geometry Nodes broke me. Apparently there are no loops in Geo-Nodes. This is not a programming language where you can tell it to repeat the same operation for different items in a list. So how the hell do I have a working curb system then? Well, I found out about the function generation.
If you look back at the screenshot where the attribute node with the vertex group data is connected through the color-ramp and into the selection of the mesh-to-points node, you can see that the line is dotted, unlike the line coming from the geometry input node. That is because this dotted line represents not data, but a function. It is not the data of the vertex group being passed through a color ramp that moves the black closer to the white, but a function, which tells the geometry to take the value of the vertex group and apply the color ramp to it before using this value. Where? Well... on every single edge of the mesh that I'm trying to turn into a point. This is the for loop. And I hadn't even realized it.
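If it helps, here is how I think about that dotted line in programming terms. This is a toy Python sketch, not anything Blender actually runs; the `color_ramp` function, its stop values and the weights are all made up for illustration:

```python
# A tiny analogy for Geometry Nodes "fields": the dotted socket carries
# a function, and the node that consumes it evaluates that function
# once per element. Here the "elements" are vertex-group weights.
weights = [0.0, 0.2, 0.6, 1.0]

def color_ramp(value, black=0.4, white=0.6):
    """Remap a value so the black/white stops sit closer together,
    sharpening the selection (like moving the stops in the node)."""
    if value <= black:
        return 0.0
    if value >= white:
        return 1.0
    return (value - black) / (white - black)

# The node tree never loops explicitly; the consuming node applies
# the function to every element -- the hidden for loop.
selection = [color_ramp(w) > 0.5 for w in weights]
print(selection)  # -> [False, False, True, True]
```

The point is that the ramp itself holds no data: it only becomes values once some geometry evaluates it, per element.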
Every element in Geometry Nodes, be it a point or an instance of another object ( which is what those spawned curbs are ), has various data points that are unique to each and every one of them. Which I can reference. And which I can do math and various other operations with. Meaning I could construct a function that does something, which, based on the different data on every instance, will do something slightly different to each of the instances. So here is what I did:
The curb modifier starts with a few inputs: the geometry itself, the name of the vertex group where the curb should spawn, the 2 different models of curb ( long and short versions ), which will spawn based on various parameters around the object, and the 2 materials that the curb will be colored with in an alternating pattern.
The two curb objects are measured, so that later I can stretch them to the appropriate size and use the data for various other things, like figuring out which one of them to use: if the edge is too short for the long curb, the short one is chosen instead, and so on and so forth. To measure the size of a curb I use its bounding box and subtract the smallest min values from the biggest max values. This gives me the size of the curb piece in X, Y and Z.
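In plain math, that measurement is just max minus min per axis. A tiny Python sketch of the same idea ( the curb vertex coordinates here are made up ):

```python
def bounding_box_size(vertices):
    """Size of a mesh along X, Y, Z: per-axis max minus per-axis min,
    mirroring the Bounding Box node's Max - Min subtraction."""
    xs, ys, zs = zip(*vertices)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Hypothetical curb piece: roughly 1 m long, 0.15 m wide, 0.3 m tall.
curb = [(0.0, 0.0, 0.0), (1.0, 0.15, 0.3), (0.5, 0.1, 0.15)]
print(bounding_box_size(curb))  # -> (1.0, 0.15, 0.3)
```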
Meanwhile I use the geometry of the road and the vertex group to find the points. But as you can see, this setup is a little bit more complicated. This is because I want each of the points to carry more data than it comes with by default. Data that is useful for the things I will need to do later, after the curb is in place.
For example, to rotate the curb so it sits on the edge that it spawned on, I need to capture the locations of both points that the edge is made out of. I can later snap the curb to one of those points and rotate it from there towards the second one. I can use the same data to stretch the curb to the correct size.
As you can see, on the left side I reference the point positions ( which are different for each edge, hence the dotted line ). I capture this data per edge and then, after converting those edges into points, I shove this data back into those points as the custom attributes V1_pos and V2_pos.
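To show why those two positions are worth carrying around: from V1_pos and V2_pos alone you can derive both the rotation and the stretch. A rough Python sketch of the math, flattened to the XY plane for simplicity ( my actual node setup works on full 3D vectors ):

```python
import math

def edge_transform(v1_pos, v2_pos):
    """From the two captured endpoint attributes, derive the yaw
    rotation and the length the curb must be stretched over."""
    dx = v2_pos[0] - v1_pos[0]
    dy = v2_pos[1] - v1_pos[1]
    angle = math.atan2(dy, dx)   # Z rotation to face V2 from V1
    length = math.hypot(dx, dy)  # distance to stretch the curb over
    return angle, length

angle, length = edge_transform((0.0, 0.0, 0.0), (3.0, 3.0, 0.0))
print(round(math.degrees(angle), 1), round(length, 3))  # -> 45.0 4.243
```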
But that is not all. As you can see, there is a lot of math between capturing the positions of the points on the very left and using them on the right.
I ran into an issue. Some curbs have a very visible beveled edge, which should be only on one side, and the curb on the test object was randomly rotated the other way around. Which was not ideal. It meant that the vertices that make up the edges are random to some extent. One edge could have V1_pos on the left and V2_pos on the right, and the next edge after it could have the two the other way around. Which made the curb inconsistent: one curb is correctly rendered and the next is turned 180 degrees from where it needs to be. And it is truly random.
What isn't random, however, is the normal of the edge. If it is bent one way, the normal will point one way. So I could use the normal to figure out whether the two points need to be flipped. And flip them.
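The check itself is simple vector math. Here is a rough 2D Python sketch of the idea ( the actual node setup may differ in details ):

```python
def needs_flip(v1, v2, normal):
    """Decide whether an edge's endpoints should be swapped so every
    curb faces the same way. The 2D cross product of the edge
    direction and the edge normal has a consistent sign for a
    consistent winding; a flipped sign means the endpoints arrived
    in the opposite order."""
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    cross = ex * normal[1] - ey * normal[0]
    return cross < 0

# Two edges along the road sharing the same outward normal (+Y),
# but with their endpoints stored in opposite orders:
normal = (0.0, 1.0)
print(needs_flip((0, 0), (1, 0), normal))  # -> False (already consistent)
print(needs_flip((2, 0), (1, 0), normal))  # -> True  (swap V1 and V2)
```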
Now the fun part. Actually populating the curb on the points themselves. As you can see this section receives quite a number of data points. We have the two curb types and their selection from earlier. We have the true size of the curb. We have our custom V1_pos and V2_pos attributes. And we have a position of each point.
The curbs are populated on the points. Then positioned to V1_pos. Then rotated to face V2_pos. Then stretched! But not so fast. They aren't just stretched from V1_pos to V2_pos. If there is enough distance between the two, they are duplicated, adding more curb in between. Which is what the large section at the bottom is doing. It is basically a lot of math that figures out how many curbs there should be on the edge, where each of them should be, and so on and so forth. Then in the end a small random offset is added to each curb, to make it feel less robotic. And off they go...
... to get colored with the two materials. This one is very simple. Every list of items in Geometry Nodes has an index: the position of the item in the list. So if I want every second curb to be black and every other one white, I can do some Boolean math on the index number and generate a function to drive the decision-making of the material selection.
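In code terms the whole material trick is just index parity. A tiny Python sketch ( the material names are made up ):

```python
materials = ["white", "black"]  # hypothetical material names

# Every element in Geometry Nodes has an index; the parity of that
# index drives the alternating material choice.
curbs = [materials[i % 2] for i in range(6)]
print(curbs)  # -> ['white', 'black', 'white', 'black', 'white', 'black']
```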
Finally we combine it with the original mesh, and send it off for good.
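Stepping back to the stretching-and-duplication part for a moment: the math there boils down to measuring the edge, dividing by the curb length, rounding down to a whole number of pieces and spacing them evenly. A simplified 2D Python sketch of that idea, without the random offset ( the function name and numbers are my own for illustration ):

```python
def distribute_curbs(v1, v2, curb_length):
    """How many curb copies fit on an edge, and where each one
    starts. Count the whole pieces, then lerp along the edge."""
    dx, dy = v2[0] - v1[0], v2[1] - v1[1]
    edge_len = (dx * dx + dy * dy) ** 0.5
    count = max(1, int(edge_len // curb_length))  # at least one piece
    step = edge_len / count                       # stretch to fill exactly
    positions = [(v1[0] + dx * i / count, v1[1] + dy * i / count)
                 for i in range(count)]
    return count, step, positions

count, step, positions = distribute_curbs((0.0, 0.0), (5.0, 0.0), 1.0)
print(count, step)    # -> 5 1.0
print(positions[:3])  # -> [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```

Note that an edge shorter than one curb still gets a single, squashed piece here; in the real setup that is where the short curb model gets chosen instead.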
Obviously, now that I know how to use Geometry-Nodes ( more or less ) I could do other crazy setups. Like this one, where I use a collection of hidden objects to add manholes and concrete patches to the road texture.
This one is a way simpler setup, yet one with some rather unique sorcery. For example, it captures the material and the UV map of the driver object and applies them to the target object. Apparently you can do this in Geo-Nodes.
Then I spent some time developing a Geo-Nodes system for distributing bushes. The kinds of bushes that they have on the side of the road everywhere here seem to be a nightmare to do by hand.
For some time I was struggling to make a building. I had to measure one using paces: I basically counted my steps, and measured one step, to roughly recreate one building.
Then I had to figure out how to make it inside of Blender, in a more or less procedural way. I used basic collection instancing here. I made a building, rotated straight, in a different collection, and then instanced it into the scene with the appropriate rotation. Now I can copy-paste this building, or move it, or do stuff with it relatively quickly, as if it is one object, while having all of the setup inside of it ( with all of the ACs and windows and other decor ) technically be separate objects. Also, I learned that a dirt texture is an important bit for buildings too. Without it, a building immediately looks fake.
BTW this tree in the shot is only 3K polygons. Which I think is an achievement.
So this is what I've got so far. The scene starts to look very realistic from a certain camera position. And I can get rather impressive looking renders out of it. I wouldn't call this done yet, since it is still not real enough. But it is damn close and I feel like what I'm trying to do is doable here. I just need to focus and grind through it.
I have learned a lot already. And I'm thinking I will have a very good understanding of this whole thing by the time I'm finally ready to do the actual shots for the movie itself. Now, in the script everything happens at night and during rain. Which is both a blessing and a curse. A blessing, because me trying to pull off this render in broad daylight is technically a bit insane: at night and during rain I can get away with more, since you see less almost by definition. But on the other hand, it is a curse, because everything I've been doing until now is dry. There are no wetness maps on any of the assets or materials I made, and I will have to figure that out. Also, not everything is wet during rain. And the rain itself is a huge particle simulation with a lot of moving parts and potentially some fluid dynamics. Which could be a huge pain in the butt.
But again... I just need to focus and grind through it.