2D Compositing for VFX explained
by Blender Dumbass
Aka: J.Y. Amihud. Jewish by blood, a multifaceted artist with experience in film-making, visual effects, programming, game development, music and more. A philosopher at heart. An activist for freedom and privacy. Anti-Paternalist. A user of Libre Software. Speaking at least 3 human languages. The writer and director of the 2023 film "Moria's Race" and the lead developer of its game sequel "Dani's Race".
29 Minute Read
This is going to be a document that hopefully will explain 2D compositing to you, such that you will have an intuition for how it is done.
2D compositing is used in visual effects for various things. The most famous type of 2D compositing would probably be chromakeying, or replacing a background in a piece of footage shot on a green-screen.
But there are also things like:
- Paint-outs of objects you don't want to have in the frame.
- Compositing in elements that are shot separately.
- Or even compositing 3D renders for CGI shots.
Pre-computer compositing
Compositing didn't start on computers. As long as there was film, there was film trickery of some kind. So let's examine a few techniques early pioneers used to make their movies.
Rear-Projection
This shot, from 2001: A Space Odyssey by Stanley Kubrick, is an example of a rear-projection shot.
You have the monkey-people, who are just actors in furry costumes, and you have a piece of rock and some ground in the foreground. But here is the trick: everything else in the background is footage taken elsewhere. This is a studio shot, yet it looks like we are somewhere in a desert.
There is a big screen in the background, behind the actors, with footage of the location projected onto it from the back, so that the actors do not cast shadows on the screen. The camera is positioned on the other side, looking past the actors and the bits of set that are real, onto said screen. This creates the illusion that they are in the desert.
This technique is so easy to use and gives such a convincing result that even modern movies use it. One good example is the 2013 Joseph Kosinski film Oblivion, where they recorded high resolution, high dynamic range, 360 degree footage of the background and projected it onto a 270 degree wrap-around screen. It gave Kosinski both the background and the lighting of the scene. Had he used standard chromakeying for those scenes ( with glass and reflective surfaces everywhere ), it would have been a nightmare to composite later. Rear projection allowed the entire effect to be done pretty much in camera.
In the modern day, photo-realistic game-engine technology allows for some really cool uses of rear projection. Scenes can be rendered in real time and adjusted to the camera, allowing the director and the camera operators to take rear-projection shots as if they were filming a regular set, while the background can be as "expensive", or fantastical, as they want.
Matte Painting
Matte Painting is when the additional element is not behind the actors, but in front of them. It is usually still used to make set-extensions and fake the background in some way.
This is James Cameron doing a matte painting for John Carpenter's 1981 film Escape from New York. The director wanted a particular type of skyline in the background of this shot, which was not possible to shoot for real for one reason or another. So they brought in an artist with very good painting skills ( in this case James Cameron ) to paint the skyline onto a piece of glass. Shooting through this piece of glass, with the shot perfectly aligned, makes for a seamless effect.
Obviously this technique has serious limitations. Each of those shots requires a skilled artist to paint a photo-realistic painting for every angle you want to shoot. The camera movement is limited to simple pans. And you cannot use strong depth-of-field effects, because your background is literally the closest thing to the camera.
Today, with digital painting software, such Matte Paintings are way easier to make.
In Camera Compositing
A film camera ( not a digital camera ) exposes a strip of film through its lens, which chemically alters said strip. This is why film is stored in such a way as to make sure it will not be exposed to light. If you open a box of fresh film, you will instantly ruin it by exposing all of it to the light in the room.
A camera stores said film inside itself in total darkness, and only exposes it to the light coming from the lens for a split second. There is a ( usually mechanical ) shutter that quickly opens and closes, allowing some light to pass through. The lens is used to focus said light onto the piece of film for a sharp and crisp image.
Here is the thing, though. Running the camera in a totally dark room, or with a lens-cap firmly on the lens, will not actually use the film. Yes, the film will mechanically go through the camera. But since there is an absence of light, none of said film will actually be exposed, meaning you can re-use it. A very similar thing happens if you try to take a shot of something completely black.
So say, for example, you want the same actor twice in the same shot. Maybe you are making a movie about clones or something. You could cover half of the lens with some sort of dark material, so you are exposing only one half of the film. Then you crank that film back and expose onto it a second time. This time, though, you move the dark material to the other side of the lens. That is how you can shoot different parts of the frame at different times, allowing you to have the same actor multiple times in the same shot, or, if you are clever, to do other effects this way.
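The two-pass exposure above can be sketched numerically. This is a toy NumPy model ( nothing any real camera does, just an illustration ), assuming film response simply accumulates light and saturates at white:

```python
import numpy as np

# Toy model of in-camera double exposure: each pass adds light to the
# strip, and the strip saturates at 1.0 ( pure white ).
frame = np.zeros((4, 8))             # a 4x8 "frame" of film, 0 = unexposed

left_open = np.zeros((4, 8))
left_open[:, :4] = 1.0               # first pass: only the left half of the lens is open
right_open = 1.0 - left_open         # second pass: the matte moved to the other side

take1 = np.full((4, 8), 0.8)         # light from the first take ( actor on the left )
take2 = np.full((4, 8), 0.6)         # light from the second take ( actor on the right )

frame += take1 * left_open           # expose, crank the film back...
frame += take2 * right_open          # ...and expose again
frame = np.clip(frame, 0.0, 1.0)

# The left half now holds the first take, the right half the second.
```

Because each pass only ever adds light where its half of the lens is open, the two takes never overwrite each other.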
For example, you can combine this technique with Matte Painting and draw a dark matte mask onto a piece of glass, to cover only a very particular part of the frame. And then you can use a reverse-mask ( a matte painting where the black and transparent parts are reversed ) to expose the other element into the shot.
Which brings us to...
Optical Printers
Imagine you want to distribute a movie back before computers could handle it, at a time when everything was done on film. You shot your film. You physically cut it and glued it together into a strip of watchable material. Now you need to make distributable copies of this film. How would you do that?
Well, you would point a projector loaded with said film at a movie camera. This arrangement is called an "optical printer". If your focusing is good and the things are aligned properly, you might even get a very good copy. Though copies often lose some quality: it is very hard to focus light from one strip of film onto another. Therefore, if you see an optical printer shot, it is usually a bit blurrier than the rest of the movie.
But now think about what you actually have here. You have a camera that is looking at a pre-recorded piece of footage. That means you can stick matte masks in between the projector and the camera, to do something similar to the in-camera compositing, but this time with way more control.
For example, instead of using a glass with a painted mask on it, you can use a third strip of film. The white parts of an image are just transparent parts on the piece of film; they let all of the light from the projector through. While black parts of the image are completely opaque on the film; they block all the light. This is literally the same idea as with a matte mask painted on a piece of glass. But with a running strip of film you can have an animated mask, which allows for some rather complex shots.
Early Blue Screen
Here is something rather interesting. On the original Star Wars ( from the late 70s and early 80s ) they were already using a Blue Screen. But computers, at this point, still weren't good enough to handle digital compositing. So how did this work?
Well, if you calibrate the color of the Blue Screen precisely, you can pass the film through an optical printer to make a "copy" of it, but pass the light through a blue filter that matches the color of the blue screen exactly. It blocks every other wavelength of light ( every other color ), and gives you a piece of film that looks somewhat like this.
Notice how the Blue Screen part of the frame is totally white, while everything else is darker. The farther away a color is from this blue, the darker it is. Yet, if you tried to use this as a mask, you would get a ghostly effect, because most of the image is still too bright.
What you can do though, is increase the contrast of the film, by exposing it longer in the optical printer, or by developing it longer, later. If you time it precisely you might get a very good black and white mask, useful for optical-printer compositing. You can pass this mask once more through the optical-printer to get a Negative of it ( where the white and the black parts are flipped ), and use them to make your effect.
You pass the background footage onto a new piece of film through the mask where the white parts are the background, leaving the film unexposed where the mask is black. And then you pass the film through again, this time using the negative mask, to expose the foreground element from the blue-screen shot. And there you go: you get yourself a blue screen compositing shot using just optics, chemistry and mechanics. No computers needed.
Obviously, if you need to, you can hand animate a piece of mask and use that. Or use multiple masks, say to remove the operator from the shot. One mask will be done chemically by extracting the blue-screen. The other painted by hand. If you are clever, you can use the optical printer for a variety of cool effects.
Modern Compositing
During the digital visual effects revolution of the late 80s and early 90s, a lot of effort was put into compositing software to make it familiar to the artists who previously used optical printers. Similarly to how computer operating systems have concepts like "desktop", "folders" and "files" to make them easier to understand, visual effects software borrows concepts from film. You still have a "cutting" tool in video editing ( usually represented with an icon of a razor ), even though nobody "cuts" film on a computer. And in compositing software you still have the concept of a mask: a black and white image that governs how two elements are going to be mixed with one another.
Channels
Take for example this render of the Honda Civic that I talk about in an article about VFX.
This is a digital image with transparent parts. But if you look into the file, you will see 4 color channels. 4 masks, so to speak, each playing its own very important role.
As you can see, there is a mask for each of the primary colors: Red, Green and Blue. In memory each is just a black and white mask, but I also drew them for you in color, so it will be easier to understand what is going on. Basically, the Red ( or R ) channel represents how red something is, G how green, and B how blue. And there is one more channel in this image: A ( or Alpha ), which serves as the mask when you want to composite this image onto another image. It represents the so-called transparency of the image.
If you add Red, Green and Blue together you get white. Just red and green gives you yellow. Just green and blue gives you cyan. And just blue and red gives you magenta. You can imagine that varying the brightness of each of the 3 can give you the entire palette of colors in between. The alpha channel is then applied as a regular mask, kind of like in the optical printer, to let the computer know where the color of the picture "below" should be shown instead.
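To make the mechanics concrete, here is a minimal "alpha over" in NumPy, a sketch of the math rather than any particular program's implementation, assuming straight ( un-premultiplied ) alpha in the 0 to 1 range:

```python
import numpy as np

# "Alpha over": where alpha is 1 the foreground color wins, where it is 0
# the background shows through, with a linear mix in between.
def alpha_over(fg_rgb, fg_alpha, bg_rgb):
    a = fg_alpha[..., None]          # broadcast the mask over R, G and B
    return fg_rgb * a + bg_rgb * (1.0 - a)

fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0    # solid red foreground
bg = np.zeros((2, 2, 3)); bg[..., 2] = 1.0    # solid blue background
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.0]])                # opaque, clear, half, clear

out = alpha_over(fg, alpha, bg)
# out[0, 0] is red, out[0, 1] is blue, out[1, 0] is an even red-blue mix
```

This is exactly the optical printer's two-pass trick, background through the mask plus foreground through the negative mask, collapsed into one line of arithmetic.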
Layers
In the optical printer you could technically layer multiple elements, passing the image through multiple times. In the computer this is usually done using layers.
This is what the layers selector in GIMP looks like. Layers are multiple images with an alpha channel stacked one on top of another. But they are not necessarily mixed using the alpha channel as the mask.
Mixing modes
Because the color channels are stored as numerical data on the computer ( usually black to white is 0 to 1, with gray being 0.5 ) you can do more than just regular masking when compositing images together.
Usually the "Normal" mode of mixing will use the alpha channel mask. But you also have a multitude of other options ( they might differ depending on the software ). Often you will have things like:
- Screen - simulates two film strips projected onto the same screen without any masking.
- Add - mathematically adding the values of light from two images. Looks similar to Screen.
- Multiply - multiplying the values of one image by the values of another. If using the standard black to white ( 0 to 1 ) range of brightness, it usually darkens one image by the other. A kind of anti-screen effect.
- Difference - creates a sort of mask where bright parts represent a lot of difference in color between two images. While dark parts represent similar colors.
And so on and so forth...
Take into account that some image formats can store color data outside of black to white range. Meaning there are colors brighter than white and darker than black. This is done by storing numerical values that could be lower than 0 or larger than 1. Those images are usually called "High Dynamic Range". And are usually stored in formats such as HDR or EXR. Doing Add or Multiply operations using those types of images might give you slightly unexpected results.
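Because the modes are just arithmetic on pixel values, they are easy to write down. A sketch in NumPy, assuming the standard 0 to 1 range ( the clip in Add is exactly the step that stops making sense on High Dynamic Range data, which is why those modes can surprise you on EXR images ):

```python
import numpy as np

# Each mix mode is a one-line formula on 0..1 pixel values.
# a is the lower layer, b is the layer mixed on top.
def screen(a, b):     return 1.0 - (1.0 - a) * (1.0 - b)  # two projectors, one screen
def add(a, b):        return np.clip(a + b, 0.0, 1.0)     # clip assumes 0..1 data
def multiply(a, b):   return a * b                        # darkens: the anti-screen
def difference(a, b): return np.abs(a - b)                # 0 where the colors match

a = np.array([0.5, 0.0, 1.0])   # three sample pixel values from the lower layer
b = np.array([0.5, 0.5, 0.5])   # the upper layer

# screen(a, b)     -> 0.75, 0.5, 1.0
# multiply(a, b)   -> 0.25, 0.0, 0.5
# difference(a, b) -> 0.0,  0.5, 0.5
```

Note that Screen can never exceed 1.0 even without clipping, which is why it behaves better than Add on bright footage.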
Compositing 3D data
When working in 3D software such as Blender, you can generate various useful maps for compositing beyond just a simple alpha channel. Inside Blender, under the view-layer properties, you can see that it gives you a lot of options to enable.
Rendering them all out, I can show you roughly what those maps look like ( keep in mind that most of them use the High Dynamic Range color space, which means you will see only some of the data in this in-browser preview ).
As you can see we have things like:
- Combined - or the render of the image itself.
- Depth - also sometimes called Z. It encodes the distance of each part of the image from the camera, usually stored in meters. So normally you would see only things closer than 1 meter away, because beyond 1 meter it uses brighter-than-white colors to represent the distance. But as you can see, I normalized the Depth for this preview, squishing the data such that the furthest part will be white.
- Position - encoding X, Y and Z location of each point into R, G and B channels ( in meters ).
- Normal - encoding the direction into which each surface is facing. Again mapping XYZ to RGB.
- UV - the 2D position of each point on their respective textures.
- Diffuse - the non-glossy surfaces' light calculation. Direct for direct hits of light. Indirect for bounce light from other objects. And the color of the diffuse materials.
- Gloss - Same thing as with Diffuse but for Glossy materials.
- AO - or Ambient Occlusion. Simulates the darkening of areas that are hard for light to reach. Also somewhat simulates proximity between objects.
- Albedo - the flat color information: what the scene would look like without any lighting or shading.
As you can see, a lot of those maps are very useful. For example, the Z-Depth buffer can be used as a mask to insert elements into the scene. You can adjust its brightness and contrast to get a mask at a certain distance from the camera, to place an element into that location.
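Thresholding a depth pass into such a mask is essentially a one-liner. A toy NumPy sketch ( the depth values here are made up for illustration ):

```python
import numpy as np

# Turn a Z-depth pass into a mask: everything nearer than `cutoff` meters
# becomes white ( 1 ), everything farther becomes black ( 0 ).
depth = np.array([[0.5, 2.0],
                  [3.5, 8.0]])     # made-up distances from the camera, in meters

cutoff = 3.0
near_mask = (depth < cutoff).astype(np.float64)
# near_mask is [[1, 1], [0, 0]]: the two near pixels are white
```

In practice you would soften the edge ( a ramp instead of a hard comparison ), but the idea is the same.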
Node based compositing
Layers are cool and all, but they can be a little too tedious when you want to composite moving images. Therefore, for more complex compositing jobs, Nodes are usually the best option.
With nodes you basically set up a sort of system that can be reused for multiple frames of the same shot. It will do the same exact set of operations for every next frame in your video.
In Blender specifically, the nodes go from left to right. On the left you have your input values; in this case the input image is the frame from Star Wars with the blue screen. On the right you have the output. You can see this image is connected to two outputs. One is called "Composite"; this will be the rendered image when you hit render. The other one is called "Viewer"; this is a preview node: whatever you plug into it, you will see in the background under the nodes. And you can see the background shows the exact image that was inputted on the left.
Here for example I added a Separate Color node that lets me use any of the color channels independently.
Here I use a mix node with the Difference mode to get a difference map between the image and the color blue ( used for the blue-screen ).
Then, using a Color Ramp, I can add contrast to the difference map, and we get a decent looking mask.
I can then use the mask as a factor ( or mask ) when mixing between the original image and a black color ( since space is black ), and you get a primitive blue screen effect.
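That whole node chain, difference map, contrast ramp, mix to black, fits in a few lines of NumPy. This is a sketch of the idea, not Blender's actual keying math; the 0.3 threshold is an arbitrary stand-in for the color ramp:

```python
import numpy as np

# A minimal difference key, mirroring the node setup ( values in 0..1 ):
# 1. distance of every pixel from the screen color,
# 2. a hard contrast "ramp" to turn that distance into a mask,
# 3. mix between the footage and black through the mask.
def difference_key(img, key_color, threshold=0.3):
    diff = np.abs(img - key_color).max(axis=-1)      # per-pixel difference from the key
    matte = (diff > threshold).astype(np.float64)    # 0 on the screen, 1 on the subject
    return img * matte[..., None], matte             # keyed image ( over black ) + mask

img = np.zeros((1, 2, 3))
img[0, 0] = [0.1, 0.2, 0.9]      # a blue-screen pixel
img[0, 1] = [0.8, 0.7, 0.6]      # a foreground pixel

keyed, matte = difference_key(img, np.array([0.1, 0.2, 0.9]))
# matte -> [[0, 1]]: the screen pixel is knocked out, the foreground kept
```

A real keyer uses a soft ramp instead of a hard threshold, which is what preserves fine edges like hair.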
Of course, Blender has a dedicated node specifically for chromakeying, called Keying, which comes pre-built with a lot of useful features. It usually does a very good job, generating a clean image with an alpha channel.
Notice I use an Alpha Over node instead of the Mix node to add this image onto the black background this time. Alpha Over is another one of those smart nodes that is designed to simplify your life a little bit.
As you can see, the Keying node was not only able to cut out the blue part accurately, but also reduced the overall blueness of the image. When you shoot on a blue-screen ( or a green-screen ), the color of the screen will often be reflected in the objects you are shooting. This node does some color-correction for you to reduce this "color spill".
Usually, though, you will still need to pass the color through some color correction steps before you can attach it to the background.
Let's go over some tools that are very useful.
RGB Curves ( also known as Color Curves ) are very useful for light color correction and adjustment of brightness and contrast.
You can see a diagonal line. This represents the graph that will alter the brightness values of your image. The X axis ( horizontal ) represents the input brightness, and the Y axis ( vertical ) represents the output brightness. If the line goes straight from one corner to another, it means black is mapped to black and white is mapped to white, and there is no change at all.
But if I do something like this, for example, it will increase the contrast, because I am telling the software to lower the brightness of the dark parts and raise the brightness of the bright parts.
Flipping the line entirely creates a negative of the image. The bright parts are now mapped to the dark parts, and the dark parts to the bright parts.
As you can see, with RGB curves you can adjust individual color channels too, controlling the color balance of the image very precisely.
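Since a curve is just a function from input brightness to output brightness, linear interpolation through a few control points can stand in for the curve widget. A NumPy sketch ( the control points are arbitrary examples ):

```python
import numpy as np

# np.interp maps each input brightness through a piecewise-linear "curve"
# defined by control points (xs, ys).
def apply_curve(channel, xs, ys):
    return np.interp(channel, xs, ys)

gray = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# The diagonal line: black to black, white to white, no change at all.
identity = apply_curve(gray, [0.0, 1.0], [0.0, 1.0])

# Darks pushed down, brights pushed up: more contrast.
contrast = apply_curve(gray, [0.0, 0.25, 0.75, 1.0], [0.0, 0.1, 0.9, 1.0])

# The flipped line: a negative of the image.
negative = apply_curve(gray, [0.0, 1.0], [1.0, 0.0])
# negative -> 1.0, 0.75, 0.5, 0.25, 0.0
```

Applying the same idea per channel ( R, G, B separately ) is exactly how the curves tool shifts color balance.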
Color Ramp is another way of mapping brightness of the image to something else. In this case, on the screenshot, you can see that darkness is mapped to black and brightness to white. This gives us a black and white version of the image.
By moving the black and the white colors around, you can adjust the contrast of the image. For example, on this screenshot the black is moved more into the center. Meaning all colors that are darker than its position on the ramp, will be black. And only brighter colors will now start moving towards white.
Or you can go completely insane and map each brightness level to its own color, making a strange kind of surreal effect.
Using a Separate Color node, you can feed into the color ramps not just the brightness values, but other things as well. You can plug the saturation of the image into the color ramp, or the hue, and then combine the channels back together at the end, creating some interesting effects.
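Under the hood, a color ramp is just per-channel interpolation between color stops along the 0 to 1 range. A sketch in NumPy, with made-up stops ( black, orange, white ):

```python
import numpy as np

# A color ramp maps one brightness value to a full RGB color by
# interpolating each channel between color stops along 0..1.
def color_ramp(values, positions, colors):
    colors = np.asarray(colors, dtype=np.float64)
    return np.stack(
        [np.interp(values, positions, colors[:, c]) for c in range(3)],
        axis=-1)

positions = [0.0, 0.5, 1.0]          # stop positions on the ramp
colors = [[0.0, 0.0, 0.0],           # black at 0.0
          [1.0, 0.5, 0.0],           # orange at 0.5
          [1.0, 1.0, 1.0]]           # white at 1.0

brightness = np.array([0.0, 0.5, 1.0])
ramped = color_ramp(brightness, positions, colors)
# ramped -> black, orange, white
```

Sliding the stop positions around is what changes the contrast, exactly as described above.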
Example of a composited frame
In this example I have composited the blue screen element ( from Star Wars ) into the 3D scene I rendered earlier. As you can see it has 3 input images.
- The blue screen element ( from Star Wars ).
- The render of the 3D scene with the car.
- The normalized Z-Depth buffer of the 3D scene with the car.
You can see each of the images is pre-stretched to the same size before anything else is done to them. I'm using a Scale node for that.
Then I use a Color Ramp node on the Z-Depth image, to create a mask of the car. Anything further ( brighter ) than the car becomes black. And anything closer, becomes white.
I flip the blue-screen image on the X axis by scaling it by -1. This way the spaceship is now on the right side of the frame. Then I plug it into the Keying node, which removes the blue screen from it.
I overlay the spaceship footage over the render of the car with the Alpha Over node first. And then I overlay the render back onto the image, using the mask I made from the Z-Depth. The result looks like the spaceship is coming out from behind the car.
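That ordering, element over scene first, then near geometry pulled back in front through the depth mask, can be sketched on toy pixel values ( all the numbers here are made up for illustration ):

```python
import numpy as np

# The order of operations from the node tree, with straight alpha assumed.
def over(fg, fg_alpha, bg):
    a = fg_alpha[..., None]
    return fg * a + bg * (1.0 - a)

car    = np.full((1, 2, 3), 0.2)     # render of the scene with the car
ship   = np.full((1, 2, 3), 0.9)     # keyed spaceship element
ship_a = np.array([[1.0, 1.0]])      # the ship covers both sample pixels
near   = np.array([[1.0, 0.0]])      # Z-depth mask: car occupies pixel 0

comp = over(ship, ship_a, car)       # spaceship over the whole scene
comp = over(car, near, comp)         # car re-applied where it is nearer
# pixel 0 shows the car ( 0.2 ), pixel 1 the ship ( 0.9 ):
# the spaceship appears to come from behind the car
```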
Custom Masks ( Rotoscoping )
In the pre-computer era, there was always the option of painting the masks needed for optical-printer compositing by hand. For example, a frame ( or multiple frames ) from the movie could be printed out on a large piece of material, or projected onto said material from behind, allowing artists to hand-paint a mask using said frame as a reference. Then, photographing this painting in the proper conditions, with proper exposure and the like, would give you the desired film mask.
This could be useful for things that are hard to get a key from. Say you want to composite a stop motion animated monster into a live action shot. But you need to put said monster in between objects. There is no blue screen. You will have to make said mask manually.
On the computer you could also draw or render masks manually. But computer software usually provides people with some tools to make this sort of job easier.
For example, in GIMP you have the Free Select Tool ( also known as the Lasso Tool ), which allows you to simply draw ( so to speak ) the boundaries of your mask straight on the image. This "selection", at the end of the day, is just a temporary mask inside GIMP, which is then applied to whatever operation you do next.
Inside Blender you also have a Lasso Tool ( in the Movie Clip Editor window ), but unlike in GIMP, this mask is not temporary, and the points of said mask are animate-able. This means you can make a moving mask, useful in the Compositor. In the industry, this sort of manual masking is called "Rotoscoping".
Custom masks in combination with chromakeying are very useful. For example, if you remember, the frame from Star Wars has a lot of stuff that we don't need: the stand on which the model is mounted, the operator in the shot. All of this can be masked out in a quick and dirty way, and then the rest can be handled by the chromakeying.
Especially today, with the need for 3D tracking, the blue / green screen itself is usually not very clean. It usually has little tracking dots all across its surface, requiring some level of garbage matting anyway.
Tracking
Which finally brings us to Tracking.
Modern Visual Effects software usually has a feature that lets you track certain features across multiple frames of video. You select what features you want to track, and the software automatically finds the most similar looking thing in the next frame. And then the next frame. And so on and so forth. It is useful for so many things.
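At its core, "finding the most similar looking thing" is template matching. A miniature single-point tracker in NumPy, using a brute-force sum-of-squared-differences search ( real trackers are much smarter about the search area and sub-pixel accuracy ):

```python
import numpy as np

# Find where a small template from one frame best matches in the next
# frame by minimizing the sum of squared differences at every position.
def track(template, frame):
    th, tw = template.shape
    fh, fw = frame.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            score = np.sum((patch - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

frame1 = np.zeros((8, 8)); frame1[2:4, 2:4] = 1.0   # bright feature at (2, 2)
frame2 = np.zeros((8, 8)); frame2[3:5, 5:7] = 1.0   # the feature moved to (3, 5)

template = frame1[2:4, 2:4]        # the feature we selected to track
# track(template, frame2) -> (3, 5)
```

Repeating this frame after frame, each time updating the search position, is what gives you the moving track point.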
Having at least 8 of those points allows you to run a trigonometric algorithm on said movement, which can re-create the full movement of the camera in 3D space. This is very useful if you want to composite 3D ( or even 2D ) elements into a shot with a moving camera.
But even if you don't want to do something that complex, tracking a single ( or a few ) point(s) is also useful for compositing. For example, you can attach various elements to those points, so those elements would move with the footage. And with a few points you can even distort your element.
For example, it can be useful for compositing signs onto walls, or content onto screens in footage where those screens are blank. Usually in Hollywood they shoot screens with nothing on them, maybe with a green or blue light ( so you can more easily get a mask of the screen's shape ). But if the camera is moving, so should the image on said screen. So some form of tracking is usually used to achieve this.
Happy Hacking!!!