Warp on dual DCI projectors


  • #16
    Originally posted by Steve Guttag View Post
    I have asked for the Anamorphic Factor to work for years now (on the first ICMP that I encountered). It wasn't an omission. There is a philosophical difference behind it. It's kinda like needing xenon light sources for a client that desires/needs them: Barco will just be summarily excluded from the candidate list, like any other product that doesn't meet the client's requirements. It is no different than omitting a multi-channel analog input on sound processors; if the project needs that feature, the CP850 and now the CP950 are excluded from the candidate list.
    Yeah, sure. But still, there is a difference... at least to me, and it can be particularly frustrating, because it's hardware vs. software: a CP850 or CP950 will never grow a multi-channel analog port, and nobody ever really designed those processors with REAL expandability in mind. Maybe the expansion slot on the CP950 can eventually be used for something other than a Dolby Atmos extension, but I doubt it will ever happen. With this, though, it's just a few lines of code somebody needs to put in there... Yeah, sure, those lines need to be tested, maintained, supported, etc., but still...

    As for Barco's apparent dislike of anamorphics, have they given you any reason why they don't want to support those configurations any longer? Maybe this is related to their "laser only" strategy? I can imagine that anamorphic lenses and laser light sources could potentially be problematic together.

    Comment


    • #17
      Barco has, falsely, claimed that anamorphics were not DCI compliant (they are, and always have been). As for the CP950 growing an analog multi-channel input... it is certainly possible. The Dolby Multichannel Amplifier, in its 16- and 24-channel versions, has analog inputs and is an AES67 device... so that could be a combination of software and possibly minimal hardware alteration to allow those inputs onto the AES67 bus. Then again, software would allow a CP950 or, possibly, a CP850 or IMS3000 to "grow" some analog inputs. Being a Q-SYS guy, when I see an AES67 system I look at all inputs and outputs as routable signals. I suspect that Dolby didn't think that one out despite, really, having everything they would need to make such a feature (so, perhaps some physical connections don't exist, or the DSP processing to route those channels may not exist). But, I digress.

      I agree, the Anamorphic Factor issue is strictly being obstinate about something that would require little effort to maintain.

      As for laser, anamorphic lenses benefit there as much, if not more. The key to laser life is running the lasers lower/cooler. If you could lower your laser power by 23%, you pick that up in terms of lifespan (something has to justify the cost of the anamorphic), or possibly by being able to use one size smaller projector, which could save over $10K right off the bat, though it would depend on which models one is switching between.
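
      As a rough sanity check on that kind of number (my own back-of-envelope, assuming a 2K DLP imager of 2048x1080 and a letterboxed 2.39:1 scope image lighting 2048x858 of it; Barco's actual figure may be computed differently):

```python
# Back-of-envelope: assume a 2K DLP imager (2048x1080). A letterboxed
# 2.39:1 scope image only lights 858 of the 1080 rows; a 1.25x
# anamorphic lens lets the full panel height carry the picture.
letterbox_rows = 858
full_rows = 1080

light_gain = full_rows / letterbox_rows        # ~1.26x more light on screen
power_saving = 1 - letterbox_rows / full_rows  # ~20.6% less laser power needed

print(f"light gain: {light_gain:.3f}x")
print(f"possible power saving: {power_saving:.1%}")
```

      That lands in the same ballpark as the 23% quoted above; the exact figure depends on which container and chip sizes you assume.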

      As for side effects, you raise a good point since, even with conventional lenses, corner-to-corner convergence on RGB laser projectors is impossible without some trickery (Christie's CineLife+ warping warps the colors individually, across the image, to get them to appear lined up). Barco denotes, in their specs, the differences between the "C" lenses and the "B" lenses on their SP4K-xxC projectors with their "Lateral Lens Color On Edge Of Image" figure.

      Comment


      • #18
        Originally posted by Marcel Birgelen View Post
        ...Warping the image is still a bit problematic, because in DCI, touching image content needs to be done inside the secured media itself or very close to the "edge", as you can't do any warping on encrypted content.
        Why?

        Encrypted or not, we know that every frame of a digipic is exactly X-pixels wide and Y-pixels tall and each frame is exactly the same size and shape as every other frame. We don't need to know the actual content of any given frame of the movie to know the Cartesian dimensions of any other frame of the movie.

        Why can't we make some subroutines that remap every pixel of "Input-XY" to a new location of "Output-XY," in the clear before passing an encrypted frame, plus a kernel file, to the rendering engine for display, which just performs a convolution matrix (or some such thing) before it displays the image?

        We've got home computers with graphics cards inside that can do that math, standing on their heads.

        Why can't cinema projectors... supposedly the greatest technology the world has ever seen... do the same thing that my broken-down, fifteen-year-old, Piece-O-Crap computer can do without breaking a sweat?
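
        For what it's worth, the remapping itself really is trivial on unencrypted data. A toy sketch of the idea (all names hypothetical; a real system would do this on a GPU with arrays, not a Python dict):

```python
# Toy forward remap: move each input pixel to a new (x, y) given by a
# precomputed warp map. The 'image' is a dict of (x, y) -> value.
def remap(image, warp_map, width, height):
    out = {}
    for (x, y), value in image.items():
        nx, ny = warp_map[(x, y)]          # where this pixel should land
        if 0 <= nx < width and 0 <= ny < height:
            out[(nx, ny)] = value
    return out

# 2x2 example: mirror the image horizontally.
img = {(0, 0): 'A', (1, 0): 'B', (0, 1): 'C', (1, 1): 'D'}
flip = {(x, y): (1 - x, y) for (x, y) in img}
print(remap(img, flip, 2, 2))  # {(1, 0): 'A', (0, 0): 'B', (1, 1): 'C', (0, 1): 'D'}
```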

        Comment


        • #19
          Originally posted by Randy Stankey View Post
          Encrypted or not, we know that every frame of a digipic is exactly X-pixels wide and Y-pixels tall and each frame is exactly the same size and shape as every other frame. We don't need to know the actual content of any given frame of the movie to know the Cartesian dimensions of any other frame of the movie.

          Why can't we make some subroutines that remap every pixel of "Input-XY" to a new location of "Output-XY," in the clear before passing an encrypted frame, plus a kernel file, to the rendering engine for display, which just performs a convolution matrix (or some such thing) before it displays the image?

          We've got home computers with graphics cards inside that can do that math, standing on their heads.

          Why can't cinema projectors... supposedly the greatest technology the world has ever seen... do the same thing that my broken-down, fifteen-year-old, Piece-O-Crap computer can do without breaking a sweat?
          First of all, warping is more than just remapping pixels from A to B; it also involves some form of anti-aliasing algorithm, where you need to take the neighboring pixels into account. To be honest, I doubt your 15-year-old piece-o-crap computer can do that without breaking a sweat, at least not at 4K and at frame rates up to 120 fps, which those systems do support. But that being said, any modern kind of GPU can do it without much trouble.
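
          To make the neighbor-weighting point concrete, here's what a single bilinearly interpolated sample looks like in plain Python (a toy, not anything from an actual projector pipeline):

```python
import math

def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at fractional (x, y),
    blending the four neighboring pixels by their distances."""
    x0, y0 = math.floor(x), math.floor(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0, 100],
       [100, 200]]
print(bilinear_sample(img, 0.5, 0.5))  # 100.0: the average of all four neighbors
```

          An inverse warp does this once per output pixel, every frame, which is why the pixel count and frame rate matter so much.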

          Secondly, you can't run any pixel-remapping algorithm over encrypted data; I hope it's obvious why. While there is some far-out research into applying "work" to encrypted data (homomorphic encryption), most of it is still highly theoretical, and it needs special encryption schemes which aren't employed by DCI. So, any image manipulation in DCI systems has to be done in the small parts of the chain where the data is unencrypted. Those are also the most heavily guarded parts of the entire chain, which complicates anything you do there. Every piece of code in there will require certification. That certification is required to make sure you're not making a "backup" of any of those pixels flowing through that code...
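
          To see why the encrypted case is hopeless, here's a toy demonstration: encrypt some "pixels" with a position-dependent keystream (hash-derived here purely for illustration; DCI actually uses AES), then try to "warp" the ciphertext. Decryption no longer yields warped pixels, just noise:

```python
import hashlib

def keystream(key, n):
    """Toy position-dependent keystream: byte i depends on key and i."""
    return bytes(hashlib.sha256(key + i.to_bytes(4, 'big')).digest()[0]
                 for i in range(n))

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

key = b'secret'
pixels = bytes(range(16))                  # pretend these are pixel values
ct = xor(pixels, keystream(key, 16))       # encrypt

print(xor(ct, keystream(key, 16)) == pixels)   # True: normal decryption works

warped_ct = ct[::-1]                       # "warp" (reverse) the ciphertext
decrypted = xor(warped_ct, keystream(key, 16))
print(decrypted == pixels[::-1])           # False: not a warped image, just noise
```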

          Comment


          • #20
            Warping on existing, mainstream DCI projectors is a fun thought experiment, and for that I can see two extremely unlikely, virtually impossible solutions:

            1. I don't know much FPGA programming yet, so I might be way off, but maybe the ICP could handle this. It does apply LUTs to pixel data in realtime. It also has a scaler that applies some amount of anti-aliasing or filtering. Curved edges on screen files are also anti-aliased, although I suspect this is rendered once as an alpha mask, saved, and then that pre-rendered mask is just alpha-blended over the content. It can support HFR and 4K. So maybe a more fleshed-out scaler in the ICP that has warp points isn't so far-fetched, but with the caveat of limited frame rate or resolution options.

            2. Software that the technician uses during image alignment to configure the warp points needed, moving the points around to where they belong on screen. This would run on a technician's laptop plugged into a DVI port. The software would then generate an auditorium-specific file that is sent to the DCP distributor. The distributor would then apply that warp data to the DCP master to generate a bespoke, pre-warped DCP, encrypt it as usual, and send it over. Aside from all of the logistical and political reasons this would never happen, it is absolutely technically feasible and would work on literally any existing system.
            Last edited by John Thomas; 12-30-2021, 02:03 AM.

            Comment


            • #21
              If Christie is able to pull it off in their most recent projectors, then I'm sure that both Barco and NEC could do it too, if they wanted to, that is, because the market is somewhat limited. Still, general availability of warping could enable some interesting things, especially for deeply curved screens.

              I'm pretty positive the curved edges in the digital masking are indeed just a pre-calculated pixel map alpha-blended onto the content. That's something Randy's "piece-o-crap computer" can probably still do in real time.
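
              That pre-calculated-mask approach is cheap indeed; per output pixel it's one multiply-add. A minimal sketch (hypothetical names, grayscale values for simplicity):

```python
def alpha_blend(content, mask_alpha, mask_value=0):
    """Blend a pre-rendered mask over content: alpha 1.0 = full mask
    (black masking), 0.0 = untouched content, in between = soft edge."""
    return [[c * (1 - a) + mask_value * a
             for c, a in zip(crow, arow)]
            for crow, arow in zip(content, mask_alpha)]

content = [[200, 200, 200, 200]]
edge    = [[1.0, 0.5, 0.0, 0.0]]   # precomputed anti-aliased edge ramp
print(alpha_blend(content, edge))  # [[0.0, 100.0, 200.0, 200.0]]
```

              The alpha ramp is computed once from the screen file, so per-frame it's just this blend, no geometry math at all.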

              Well, it's been a while since I toyed around with FPGAs; interesting but very time-consuming stuff... There are multiple FPGAs on the TI ICP, and TI is deliberately vague about their purposes. The scaling looks to me like a bilinear affair, which is something that should be pretty easy to solve with any decent FPGA, even without a LUT.

              A LUT-based approach for warping would certainly work and could be very efficient. I'm not sure you could fit it into the existing ICP pipeline, though, because as you indicated, it's already applying LUT-based transformations to pixel data. The warping would involve an extra step, and it's pretty hard to say whether the existing FPGAs could handle one more processing step and whether there is sufficient SRAM available.

              The pre-warping of DCPs is an interesting concept but, as you indicate yourself, the entire supply chain would need to be modified, and rendering a pre-warped DCP per site would make that process much more complicated. I've been advocating full-resolution anamorphic scope releases for years, for those locations that do have anamorphic lenses, but DCI won't allow "anamorphic pixels" inside the DCP...

              Comment


              • #22
                I'm just thinking on my feet, here. Okay?

                Every frame of a movie must have the same pixel dimensions. Right?
                We already know where every one of those pixels must go but we just don't know what color/brightness each of those pixels needs to have.
                We take some information that says what those colors are supposed to be and put that data into their respective pixel-boxes.
                The rendering part of the projector has a magic box that does all that.

                Yes, I know that it's a lot of computation that's not exactly trivial. Anti-aliasing and denoising are also problems to solve.
                I understand that the pixel dimensions of different resolution screens make a big difference in the amount of computational work to be done.
                Doubling the number of pixels in each direction, keeping the aspect ratio the same, means four times the work.
                Frame rate is your "bonus multiplier."
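
                Putting rough numbers on that, using the standard DCI container sizes (this is just pixel-counting arithmetic, not a performance claim):

```python
# Pixels per second for the standard DCI container sizes at various rates.
resolutions = {'2K': (2048, 1080), '4K': (4096, 2160)}
for name, (w, h) in resolutions.items():
    for fps in (24, 120):
        print(f"{name} @ {fps:>3} fps: {w * h * fps / 1e9:.2f} Gpixel/s")
# 4K is exactly 4x the pixels of 2K, and 120 fps is a further 5x over
# 24 fps, so the worst case is 20x the 2K/24fps workload.
```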

                I'm sorry if it wasn't clear that what I said about fifteen year old Piece-O-Crap computers was hyperbole.

                Okay... So you've got "Data-In" ==> "Magic-Box" ==> "Display."

                If your data is encrypted, you've got "Data" ==> "Magic-Box(Magic-Box)" ==> "Display."
                (Where the second item is the rendering occurring inside the decryption section.)

                If you want warping on top of decryption you have "Magic-Box(Magic-Box(Magic-Box))."

                That's a lot of magic! Strong magic! If you don't get the incantations right, the magic won't work.
                It's a very computationally intensive process that can get exponentially more difficult, and the code is difficult, time-consuming and expensive to write and certify.

                But, encrypted or not, we still know where our pixels should go.
                Really, if you think about it, the decrypter doesn't actually need to know anything about pixels. It only really needs the ciphertext and the key to generate the deciphered data. I don't know, it might be helpful for it to have access to the pixel dimensions, but I don't think it's strictly necessary. Is it?

                So... we make a kernel file and place it where every part of the system can find it when it needs it.
                We give the encrypted frame to the rendering part of the system and we give it the key.
                The rendering part takes that information into the security zone and gives it to the decryption section.
                The display section receives the decrypted information and puts the pixel colors in the boxes where they belong.
                If there is warping involved, the convolution-matrix-or-something uses the kernel file to move pixels around before the display section gets it.
                That information can be made available so that anti-aliasing and denoising algorithms can do their magic, too.
                This is all taking place inside the security zone, so it shouldn't be a problem. Should it?

                I know that this isn't just one of your average holiday games we're playing here.
                It ain't easy! But I feel certain that it can be done.

                A friend of mine runs a company that makes goal cameras for ice hockey rinks. He writes his own software, ground up, and installs his own cameras.
                He specifies the equipment he wants from his suppliers but, no, this isn't off-the-shelf stuff.

                He's got computer programs that can detect a hockey puck, going over the goal line at speeds in excess of 100 MPH.

                The whole thing runs on a regular iMac computer. He chose it because it's an all-in-one computer that you can put into a road case, transport and set up in five minutes.

                I helped him do an installation and I was there when he put the computer together and made it work.

                If one guy can do that (I concede that his degree is from MIT) a team of professional computer engineers from a million dollar company can figure SOMETHING out. Can't they?

                I get it that current DCI rules might not allow this scheme. Why can't we figure that out, too?

                We already know where every pixel should start and we know where we want them to go and that doesn't change throughout the whole movie.

                That could be a way to find a wormhole through the asteroid field. Couldn't it?

                Comment


                • #23
                  Your "magic box" is the ICP. In Series 2, the ICP design was all the same Texas Instruments board, which is why any ICP can operate in any manufacturer's Series 2 projector (or so I'm told; never tried it). We're just now starting to see these kinds of advanced image processing features crop up in new machines because Series 3 opened up the ICP design to the manufacturer, who could then build these features in. I suspect this is really just a case of adding a few graphics-card-type FPGAs to manipulate the image before it gets passed to the same old TI FPGAs, but that's just a guess. So, more accurately, THAT'S your "magic box." Professional computer engineers at multi-million-dollar companies have already designed, implemented, manufactured, and sold this in the form of the products mentioned earlier in the thread.

                  Now that I think of it, IMAX already solved this more than 10 years ago, using series 1 machines no less, with their Image Enhancer. Won't elaborate because I signed a thing.

                  You bring up an interesting point though: Could this be done in the IMB? Not in any existing one, but maybe an IMB could be designed with that bit of image processing built-in. Or MAYBE you could build a board that connects between the IMB and the backplane, Sonic & Knuckles style, handling your warping. The problem here would be the encryption. The board would have to spoof the IMB into thinking your board is an ICP, process the image, and then spoof the ICP into thinking your board is an IMB. Magic with a capital M, and trouble with a capital T, because at that point you have defeated encryption that is integral to DCI.


                  More on topic, I seem to recall the 45-degree mirrors used on the Christie Duo actually being physically flexible, with a few threaded adjustments on the edges and corners that effectively warp the plane of the mirror, for the exact purpose of improving dual-projector convergence. If you aim the projectors at each other and use TWO 45-degree mirrors, the vertical offset is minimized, really attacking the problem at the source. I don't have any hands-on experience with this exact setup, so I can't say how effective it is in reality.
                  Last edited by John Thomas; 12-30-2021, 08:23 PM.

                  Comment


                  • #24
                    Originally posted by John Thomas View Post
                    Magic with a capital M, and trouble with a capital T...
                    Yeah, I get that. No illusions.

                    "Magic Box" is just a way to conceptualize things while working out ideas.
                    Yes, I know that (A) it's an over-simplified paradigm and (B) that there are some really freakin' smart people who do a lot of freakin' hard work to make technology.

                    Back in my college days, my friend I told you about who makes hockey goal cams used to work on machine vision systems that were used to analyze Pap smear samples on microscope slides. The idea was to weed out the obviously good samples and kick the questionable ones back to a human for further analysis. I got to look over his shoulder and learn about the technology and, occasionally, pitch in with ideas. We also worked on voice recognition systems. Do you remember Dragon Dictate? We worked on stuff that came before that.

                    I could also do a little bit of coding and, occasionally, helped proofread code before it was submitted but that was nearly thirty years ago. Memory fades and technology progresses. Surely, I'm behind the curve after all these years but, still, I have some idea of what's involved.

                    I'd love to get back to programming and solving problems but, alas, the realities of life make that impossible. Maybe some day...

                    Also, the way I think seems to be different from most people's. I start at the top of a problem, ask a question, then break the problem down into chunks. I look at each chunk separately, asking more questions, finding more answers and digging deeper until I get to the bottom of the problem. Then I build back up to the top of the problem until the whole thing is solved.

                    Sometimes, when digging down, you find that it's a bottomless pit. In that case, you have to climb back up, take a look at the problem from a distance and decide whether to shelve the project until later when you can learn more.

                    Unfortunately, I end up with a lot of "shelved projects" that way but, on the other hand, I also end up with a lot of complete projects where people say, "I never would have thought of that."

                    This question is the same, for me. I know lots of things about digital projectors but I don't know everything. I'm just trying to "connect the dots" between what I know and what I don't.

                    I am keenly aware that this could be another one of those "bottomless pit" projects but, like a Pit Bull, I don't give up easily. I have had shouting matches with bosses over just such issues but, sometimes months later, I often come back with a solution that works and the boss asks, "How did you do that?"

                    At Mercyhurst College, where I used to work, I wanted to make the house lights fade out instead of "snapping" out when the movie started. We only had a very simple 0-10 volt dimming control system, and people told me that there was no way to do what I wanted. I pretended to listen to them, but I went home, got out the books and manuals and studied. Several months later, I built a simple resistor/capacitor fader circuit, running on a 555 timer, built on a board about the size of a Post-It Note, spliced it into the control wire, then hid it inside the junction box in the projection booth. Another wire ran from the TA-10 automation on the projector to trip the 555. It worked perfectly!

                    About a week later, I was standing at the back of the auditorium with my boss and I had a Work Study student running the projector. The lights faded down, my boss noticed and asked, "What just happened? Did Megan (student) do that?"
                    I said, casually, "No..."
                    He asked, "But, how?"
                    I replied, "Remember that circuit I asked you about last semester? The one you said wouldn't work?"
                    He wrinkled his brow and said, "Uh-huh..."

                    I smirked, gave him the finger, walked away then went up to the booth and told Megan that the boss said she did a good job.

                    Caveat: I had a relationship with my boss where we could banter and razz each other like that. If I had tried a project like that and failed, he would have done the same to me and neither would have taken it personally. Don't try something like that if you're not tight with the people you work with!

                    Anyhow, that's an example of the way I work. I don't give up easily. I'm sorry if that mannerism ruffles feathers. It's not intentional.

                    It's just that my head will explode if there is a problem to solve and I'm not allowed to, at least, try to figure it out.

                    Comment


                    • #25
                      Originally posted by John Thomas View Post
                      Now that I think of it, IMAX already solved this more than 10 years ago, using series 1 machines no less, with their Image Enhancer. Won't elaborate because I signed a thing.
                      If I remember correctly, the "Image Enhancer" solved that problem using GPUs, not FPGAs. IMAX's solution was, effectively, to declare the whole box off-limits. Maybe IMAX tells exhibitors those stories to keep anybody from toying with their equipment, but I was told by one of them that IMAX said the box was "boobytrapped media-block-style" and would drop its keys as soon as someone tried to open it. Apparently this design was good enough for the DCI police, although IMAX's digital xenon is probably anything but DCI compliant anyway...

                      Originally posted by John Thomas View Post
                      More on topic, I seem to recall the 45-degree mirrors used on Christie Duo actually being physically flexible, with a few threaded adjustments on the edges and corners that effectively warp the plane of the mirror for the exact purpose of improving dual-projector convergence. If you aim the projectors at each other, and use TWO 45 degree mirrors the vertical offset is minimized, really attacking the problem from the source. I don't have any hands on experience with this exact setup so I can't say how effective it is in reality.
                      Those mirrors are indeed adjustable on the edges, but that system isn't automated, at least not in the setup I've seen. The biggest advantage of the mirror setup is that, by using identical projectors and bringing their lenses as close together as possible, the native geometry of both projectors will already line up pretty decently.

                      @Randy Stankey: Any good digital encryption scheme should generate data that's essentially indistinguishable from random binary data. As a matter of fact, you shouldn't even be able to see where one frame ends and the next one begins. Any warping algorithm needs raw pixel data to be useful. It also needs to know the coordinates of those pixels, as the transformation itself is entirely dependent on that information. So, no matter what, this algorithm needs to be fed unencrypted pixel data, and as such it needs to run inside the "secure enclosure" of one of the magic boxes. Running it anywhere else punches holes in DCI security and won't be allowed. Therefore, this code needs to be audited, to be sure the algorithm isn't secretly leaking unencrypted pixel data to the rest of the world, and that's where the problem is, because that process is tedious, expensive and not really accessible for people like us.

                      Comment


                      • #26
                        I must misunderstand something.

                        I thought a DCP file was essentially hundreds of thousands of individually compressed JP2K frames displayed one after another. Are you saying that the whole metafile is encrypted in one shot? I thought it was each frame that was encrypted, individually.

                        That throws a wrench in the monkey works!

                        If I had a checkerboard of 64 squares (8 x 8) and that checkerboard needed to have combinations of red, green and blue checkers in the correct locations of every square in order to make one frame of a movie, I'd need 24 bags of checkers for each second of the movie. That would be 4608 checkers. (((8*8)*3)*24)

                        I thought that there would be 24 bags of checkers, each containing 192 checkers, all shaken up, with a padlock on each bag. There would be a slip of paper inside each bag with a map of where all the checkers need to be placed on the checkerboard.

                        In order to display the movie, you would have to use a key (the same key for every bag) to open up each bag in sequence, read the map, take the checkers out of the bag then place them on the board in the right order. Once that's done, you wipe the board clean, throw all the used checkers into a shredder then open the next bag and repeat the process. This is done for every bag of checkers, in sequence, until every bag has been opened and arranged.

                        Are you saying that there is only one bag, with one padlock, that contains 4608 checkers and you have to take every checker out of the bag before you can use a map to place them on the board?

                        Are you also saying that we aren't even told how many checkers are in that bag or what colors those checkers may be until after we have opened it and read the map?

                        If that's true, you are right. There isn't an easy way to apply a displacement algorithm to the checkers/checkerboards until the bag is opened and all the checkers are taken out and counted to be sure that we have all the checkers we need before we start. That's a really big, greasy monkey wrench!

                        I was working under the assumption that, even though we don't know the contents of any of the bags, we can infer what must be inside them. There must be a checkerboard of a pre-defined number of squares, in a pre-defined Cartesian grid. We must have enough checkers to fill every square on the board and we need enough checkers to do that 24 times. (Assuming that our movie is one second long.) We also have a pretty good idea of what the colors of those checkers need to be because the color space of our movie is also pre-defined. We can't have any checkers that aren't inside our color space.

                        We can use those assumptions and process of elimination to make the amount of work we have to do easier.

                        I assumed that an imaginary semi-truck delivered a giant box to your theater which contains 24 bags of checkers for every second of movie. All that you have to do is take out every bag and follow the procedure.

                        Now, what I think I am hearing is that it isn't a semi delivering a box but, instead, it's a dump truck that dumps a giant pile of checkers onto the floor and, not only is the number of checkers unknown, we don't even know what colors they are until after we have signed the bill of lading.

                        Yes, that makes things a lot more difficult. Not impossible. Just so difficult and time consuming that we'll never be able to sort out all the checkers in the allotted time it takes to play a whole movie.

                        I suppose we could create something along the lines of a Sieve of Eratosthenes to shortcut the process but, in that case, we don't have enough memory or processor power to do that AND all the other things that the computer/projector needs to do in the meantime.

                        In that case, we are digging into a bottomless pit.

                        We'd be better off searching for buried treasure on Oak Island!
                        Last edited by Randy Stankey; 12-31-2021, 02:59 PM.

                        Comment


                        • #27
                          The data files are likely encrypted as a whole file. The SDI data stream is encrypted a different way. With an internal server or IMB, that stream encryption is skipped.
                          The original ICP has the unencrypted image data and converts it to whatever the formatters understand. It could do a fair bit of image manipulation, but not all projectors had access to it in their software, although with the ICP/Enigma control program it can be used anyway.
                          For the "classic" IMAX digital projectors (Christie or Barco), the entire outer case is a secure enclosure with tamper switches. The IE is also a sealed box. Unlike other DCI projectors, where the marriage process only requires local access to the projector (not sure NEC even needs that...), you have to call IMAX and get a code to remarry the projector after opening the case (maybe only if you take projector boards out); if the marriage is broken, you can't fix it on your own.

                          Comment


                          • #28
                            Clearly, the whole file being encrypted makes my ideas a lot harder to implement.

                            I have made DCP packages, myself, and played them on a cinema projector. I understand the basic structure of a DCP but, except for ingesting, installing the key file(s) and running a show, I have not dealt with encrypted files. All the DCPs I made were done entirely in the clear. I wrongly assumed that the individual frames were encrypted separately, not the whole kit and caboodle. I never looked. I just assumed.

                            Cardinal rule of working with mission-critical equipment: Don't tinker with stuff you don't fully understand.
                            I just wouldn't fool around with encrypted movies where, if something unexpected happened, I wouldn't be able to play a show on time.

                            Working with my friend, I learned that Apple's "Core Image" framework has functions that can do affine transforms on video, on the fly.
                            If, for instance, you have a camera above the goal line, pointed at the opposite goal, the image of the rink will have a distorted perspective.
                            We know that a hockey rink has very specific dimensions and there are five faceoff circles that you can use as fiducial marks.
                            Knowing where those fiducials are supposed to be, you can input the video frame(s) plus the offset(s) to CoreImage and the QuickTime framework does the rest for you.
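
                            The fiducial trick boils down to solving a transform from known point pairs. A minimal sketch in plain Python (the coordinates are made up, and a real system would use a perspective transform from a library rather than hand-rolled Cramer's rule):

```python
def solve_affine(src, dst):
    """Solve the 2D affine transform (a, b, tx, c, d, ty) mapping three
    src points onto three dst points, via Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[x, y, 1] for x, y in src]
    den = det3(A)
    def solve(rhs):
        cols = []
        for j in range(3):
            m = [row[:] for row in A]
            for i in range(3):
                m[i][j] = rhs[i]
            cols.append(det3(m) / den)
        return cols
    a, b, tx = solve([x for x, _ in dst])
    c, d, ty = solve([y for _, y in dst])
    return a, b, tx, c, d, ty

def apply_affine(p, t):
    a, b, tx, c, d, ty = t
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

# Hypothetical fiducials: three faceoff-circle centers as seen by the
# camera (src) vs. where they sit on the rink diagram (dst).
src = [(100, 50), (300, 60), (200, 220)]
dst = [(0, 0), (200, 0), (100, 170)]
t = solve_affine(src, dst)
print(apply_affine((100, 50), t))   # lands on (0.0, 0.0), up to rounding
```

                            Once the transform is solved from the fiducials, every frame just gets the same correction applied.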

                            I thought that warping the image on a DCP system would be similar to what QuickTime can do on my home computer only bigger-better-faster-stronger.

                            This may mean that I have to shelve my idea but, maybe, an idea will pop into my head, some day.

                            Those kinds of "Aha!" moments are common for me.
                            Last edited by Randy Stankey; 01-02-2022, 04:03 PM.

                            Comment


                            • #29
                              Originally posted by Randy Stankey View Post
                              Working with my friend, I learned that QuickTime's "CoreImage" video framework has functions that can do affine transforms with video, on the fly.
                              If, for instance, you have a camera above the goal line, pointed at the opposite goal, the image of the rink will have a distorted perspective.
                              We know that a hockey rink has very specific dimensions and there are five faceoff circles that you can use as fiducial marks.
                              Knowing where those fiducials are supposed to be, you can input the video frame(s) plus the offset(s) to CoreImage and the QuickTime framework does the rest for you.

                              I thought that warping the image on a DCP system would be similar to what QuickTime can do on my home computer only bigger-better-faster-stronger.
                              There's lots of software out there nowadays that can warp images in real time on relatively modest hardware. But the "cardinal rule" of encrypted data is that you don't know what's in the secret container.

                              In the end, there are several points in the signal path from hard disk or SSD to the imager inside the projector where the image needs to be decrypted AND decompressed, but DCI specifications only allow this to happen inside a "secure enclosure". As soon as you want to manipulate the data that goes on screen, you inevitably need to punch some kind of hole in there, even if the data is handed to you via a back door.

                              If you had to do the same transformation on every pixel, a possible solution might be passing the pixels in random order, so your "insecure algorithm" doesn't know where each pixel goes. While this would somewhat weaken the encryption, the data you'd get would still be largely useless. But for a warping algorithm, the coordinates of each pixel are an integral part of the algorithm.
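                              A toy sketch of that distinction (made-up one-dimensional "frame", not real DCP data): a position-independent operation commutes with a secret shuffle, while a warp is a coordinate remap and therefore cannot be evaluated without the very positions the shuffle hides.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=64)   # toy one-dimensional "frame"
perm = rng.permutation(frame.size)      # the secret shuffle order

def brighten(px):
    # Position-independent: the same function applied to every pixel.
    return np.clip(px + 20, 0, 255)

# Shuffle, process, unshuffle gives the same result as processing directly,
# because a per-pixel operation never needs to know where a pixel lives.
shuffled = frame[perm]
unshuffled = np.empty_like(frame)
unshuffled[perm] = brighten(shuffled)
assert np.array_equal(unshuffled, brighten(frame))

# A warp, by contrast, is a coordinate remap: output[i] = input[warp_map[i]].
# Evaluating warp_map requires each pixel's true coordinate -- exactly the
# information the shuffle was supposed to conceal.
warp_map = np.arange(frame.size)[::-1]  # trivial "warp": mirror the frame
warped = frame[warp_map]
```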

                              Comment


                              • #30
                                I thought that there would be 24 bags of checkers, each containing 192 checkers, all shaken up, with a padlock on each bag. There would be a slip of paper inside each bag with a map of where all the checkers need to be placed on the checkerboard.
                                It's actually way more complicated. Each frame (of 4K content) is made of 6 tile parts: three for the 2K image and three more for the additional (4K) image information. So in a two-hour movie there are over a million "image segments" that need to be uncompressed, decrypted, color corrected and displayed. I'm not saying they couldn't do warping if they wanted to, but it certainly isn't trivial.
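                                Back-of-the-envelope, the "over a million" figure checks out (assuming a 24 fps DCI frame rate and the six tile parts per frame described above):

```python
seconds = 2 * 60 * 60        # two-hour feature
fps = 24                     # standard DCI frame rate
tile_parts = 6               # 3 for the 2K layers + 3 more for the 4K layer
frames = seconds * fps       # 172,800 frames
segments = frames * tile_parts
print(segments)              # 1036800 -- just over a million
```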

                                I was always under the impression the "industry" (studios) did not want warping capability because it had the potential to change what the filmmakers intended to be seen. Maybe the technology is so good now that this is no longer the case, but I see a lot of "AV" companies that just put up projectors and use whatever built-in warping tools are available, rather than trying to get the physical alignment as close to spot-on as possible first - and it does show.

                                Comment

                                Working...
                                X