Welcome to the new Film-Tech Forums!
The forum you are looking at is entirely new software. Because there was no good way to import the last 20 years of archived data from the old software, everyone will need to register for a new account to participate.
To access the original forums from 1999-2019, which are now read-only, click on the "FORUM ARCHIVE" link above.
Please remember that registering with your REAL first and last name is mandatory. This forum is for professionals and fake names are not permitted. To get to the registration page, click here.
Once your registration has been approved, you will be able to log in via the link in the upper right corner of this page.
Also, please remember that while uploading an avatar image to your profile is highly encouraged, it is not a requirement. If you do choose to upload an avatar image, it IS a requirement that the image be a clear photo of your face.
Thank you!
If you double one dimension of the screen, you have to double the other to keep the aspect ratio the same.
If you double the height, you have to double the width, and doubling both dimensions quadruples the pixel count.
The way I understand it, you calculate the spatial size first, then you account for compression. So, if you double or halve the frame dimensions, you multiply or divide the raw size by four. Afterward, depending on what method you use to compress the individual frames, the file size decreases accordingly.
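A quick back-of-the-envelope sketch of that spatial scaling (assuming plain 8-bit RGB at 3 bytes per pixel just for illustration; a real DCP uses 12-bit X'Y'Z', but the ratio works out the same):

```python
# Uncompressed frame size scales with the pixel count (width x height).
# 8-bit RGB (3 bytes/pixel) is an assumption for illustration only.
def frame_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

size_2k = frame_bytes(2048, 1080)   # ~6.6 MB per raw frame
size_4k = frame_bytes(4096, 2160)   # ~26.5 MB per raw frame

print(size_4k / size_2k)  # 4.0 -- doubling both dimensions quadruples the data
```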
The same goes for temporal calculations. Going from one frame rate to another, you multiply or divide accordingly; then, depending on how your encoder applies I-frames, P-frames, or B-frames (check your encoder settings), the file size shrinks (or doesn't shrink) even more.
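Same idea on the temporal side: before any inter-frame compression kicks in, the raw data scales linearly with frame rate (again a rough sketch, not encoder-accurate):

```python
# Raw (pre-compression) stream size scales linearly with frame rate.
def raw_stream_bytes(width, height, fps, seconds, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps * seconds

per_sec_24 = raw_stream_bytes(2048, 1080, 24, 1)
per_sec_48 = raw_stream_bytes(2048, 1080, 48, 1)

print(per_sec_48 / per_sec_24)  # 2.0 -- double the frame rate, double the raw data
```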
Color settings I don't understand as well. I rarely set the color space manually; I usually use whatever color profile is appropriate for my intended display. I would decide on my color space before applying spatial and temporal sizes/compressions. Set it and forget it. Usually, the color space is mandated for me anyhow (cinema, television, computer, print, etc. each require a particular setting).
As I understand it, all frames in a DCP are I-frames. Partial (predicted) frames and bi-directional frames aren't used. Essentially, temporal compression doesn't matter much in a DCP.
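If that's right, here's a rough worked example (assuming the commonly cited DCI ceiling of 250 Mbit/s for the JPEG 2000 stream; treat the exact figure as my assumption):

```python
# Per-frame byte budget in a DCP where every frame is an I-frame.
# The 250 Mbit/s ceiling is the commonly cited DCI maximum (assumption).
max_bitrate = 250_000_000   # bits per second
fps = 24

per_frame = max_bitrate / fps / 8   # bits -> bytes
print(f"~{per_frame / 1e6:.2f} MB per frame")  # ~1.30 MB, same budget for every frame
```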
So, assuming color is set in stone, frame size is the first factor in final file size, followed by per-frame (spatial) compression, and then inter-frame (temporal) compression.
Is this right?
I'm also not your teacher, but let me try to explain some high-level, highly simplified concepts about modern lossy compression.
Some of the hints are in the name already: lossy compression. Whenever you apply it, you're going to lose some of your original input data. While lossless compression guarantees a one-to-one mapping of your data after a compression and decompression cycle, lossy compression certainly does not.
A DCP's picture components are usually encoded in Motion JPEG 2000. So, as you correctly implied, there is no temporal compression here; every frame is independently compressed and stored as such. Every frame still represents a full picture and isn't dependent on a "key frame", like in many of the MPEG and related compression formats.
So, here comes the great simplification: consider formats like JPEG and JPEG 2000 not so much as a one-to-one pixel mapping of the image, but more as a format that describes what defines the picture, such that, given a sufficiently detailed description, the original can be reproduced with barely any visible difference between the reproduction and the original.
Imagine a black square on an otherwise completely white canvas. If I saved every pixel individually, using the same color depth per pixel, a 4K image would take four times as much storage space as the same 2K image, as you correctly pointed out.
But it hardly takes any more storage to hold the instructions for painting that same rectangle on a 4K canvas than on a 2K one. Therefore, the JPEG 2000 compressed image of this "black square on white" will not differ much in size between the 2K and 4K versions.
The size difference between a 4K and a 2K JPEG 2000 compressed image depends on the complexity of the image (and some other factors, like the quality loss setting, but let's assume those are constant). The complexity of an image isn't easily quantified, but you can get a good general idea: something that contains a lot of flat areas of roughly the same color compresses very well, for example, while small details, especially with lots of different colors in them, compress pretty badly. One of the worst things to compress is therefore RGB noise. Noisy images are effectively more complex than clean, smooth ones.
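You can see this for yourself with a quick sketch using Pillow and NumPy. I'm using plain JPEG as a stand-in here, since Pillow's JPEG 2000 support depends on an OpenJPEG build, but the effect is the same:

```python
import io

import numpy as np
from PIL import Image

def jpeg_size(img, quality=85):
    """Return the compressed size in bytes at a fixed quality setting."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

# A 'simple' image: one flat color, maximal redundancy.
flat = Image.new("RGB", (2048, 1080), "white")

# A 'complex' image: pure RGB noise, no redundancy to exploit.
rng = np.random.default_rng()
noise = Image.fromarray(rng.integers(0, 256, (1080, 2048, 3), dtype=np.uint8))

print(jpeg_size(flat))   # tiny -- flat areas compress extremely well
print(jpeg_size(noise))  # huge by comparison -- noise barely compresses
```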