IAB is the open standard that DCI has accepted. It is essentially an evolved version of ATMOS, with Dolby allowing free use of the relevant patents in theatrical, from my understanding. It encompasses the ATMOS implementation. So, in effect, IAB replaces ATMOS and is compatible with any ATMOS-labelled CPL.
So, as a projectionist, when you see IAB in the CPL name, think of it as the same as ATMOS; it replaces it.
Vendors other than Dolby have been, or will be, coming out with immersive-capable processors/renderers. They will play IAB and any originally ATMOS immersive content.
Well, that's how it's supposed to work.
It's more than that. IAB = Immersive Audio Bitstream. It is a way of encoding the movie's audio such that the particulars of each method are honored. Atmos, Auro, DTS:X and any others will use the IAB bitstream, and your decoder will decode what is on the track with respect to audio placement. It is a one-size-fits-all sort of thing rather than having multiple bitstreams. So, if the track has an Atmos mix, your Atmos decoder will get it just as it would with a dedicated track...the same with the other immersive formats. But if the movie wasn't mixed for a feature of your particular system (e.g. "height" channels behind the screen), then your system will play what matches its configuration, and it should not synthesize a channel/object that was not put into the track.
It also takes branding/trademark names out of the distribution name on CPLs.
Since all movies have "bed" channels (good old 5.1 or 7.1), all movies should play without losing key features like dialog. I'm not quite sure how it handles an object that goes to a channel not present in other systems (there could be a contingency for that, such as a generic surround that is present in all systems). For instance, in an Atmos mix, an object might run down the LTS from front to rear...other systems don't have that level of detail...or take Auro, which has height surrounds...is there a means for non-Auro systems to fold those sounds into the main surrounds (as encoded on the track...not left for the other processors to figure out)?
I suspect that it may force all systems to put non-bed channels in an x,y,z space and, based on that, the object is placed to the closest approximation, the same way an Atmos system does using the available speakers (which is why more speakers/amplifier channels are better). So, if it is an Auro-only mix and the object or array is on, say, the right height surrounds, an Atmos system would see the object as being on the right wall and put it on the speakers it has, without height. To be clear, I have not read the document specifying IAB, so the above (on how it works) is speculation on my part.
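To make that speculation a bit more concrete, here is a minimal sketch of the kind of nearest-speaker mapping I'm describing. The coordinate convention, the speaker positions and the pick-the-closest rule are all invented for illustration; a real renderer pans an object across several speakers rather than snapping it to a single one, and none of this comes from the IAB document itself.

```python
import math

# Hypothetical speaker layout in normalized room coordinates (x: 0=left, 1=right;
# y: 0=screen, 1=back wall; z: 0=ear level, 1=ceiling). Positions are invented
# for illustration only.
SPEAKERS = {
    "L":   (0.2, 0.0, 0.0),
    "R":   (0.8, 0.0, 0.0),
    "Lss": (0.0, 0.5, 0.0),   # left side surround
    "Rss": (1.0, 0.5, 0.0),   # right side surround
    "Lrs": (0.3, 1.0, 0.0),   # left rear surround
    "Rrs": (0.7, 1.0, 0.0),   # right rear surround
}

def nearest_speaker(obj_pos):
    """Map an object's (x, y, z) position to the closest available speaker.
    A system without height speakers simply has no z > 0 entries, so a
    'right height surround' object lands on the right wall at ear level."""
    return min(SPEAKERS, key=lambda name: math.dist(SPEAKERS[name], obj_pos))

# An object placed at the right height surround position (x=1, halfway back, up high)
print(nearest_speaker((1.0, 0.5, 1.0)))   # -> "Rss" with this made-up layout
```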
Vendors other than Dolby have been, or will be, coming out with immersive-capable processors/renderers. They will play IAB and any originally ATMOS immersive content.
Are you sure about that? Genuine Atmos CPLs require a separate KDM for the Atmos audio: without it, you can only play the regular 5.1 or 7.1 (roughly equivalent to having a 35mm print with SR-D on it, but your projector only has an analog SVA reader). Will Dolby issue KDMs to play Atmos tracks on non-Dolby processors, and even if they will, are Barco Auromax and/or DTS:X processors (for example) able to decode proprietary Atmos tracks as distinct from generic IAB?
Yes, Leo. The whole idea of introducing IAB was to enable this level of compatibility, and Dolby was part of the discussions and agreements. They may still offer added value to genuine ATMOS systems, without breaking the compatibility. Also, KDMs for IAB or ATMOS systems are not generated by Dolby; they don't rely on Dolby being involved.
Is a separate KDM required for IAB if it is rendered in the digital cinema server (as was originally done by USL and is now also done by Dolby)? With an external processor, the encrypted content has to be sent to that processor where it is decrypted and rendered to multichannel audio and watermarked. If it's all done within the security boundary within the media block, is a separate KDM required?
I was on the SMPTE committee developing IAB. Originally, we were looking at the systems from Dolby and from DTS, trying to take the best features of each and come up with a new bitstream. Eventually, the committee was directed to use Atmos as the starting point and come up with a bitstream based on that. Dolby provided a starting document and we derived the standard from that. A few features were added. I spent a LOT of time trying to clarify the language so it could be interpreted by someone without inside knowledge of how the system worked. Also, the system is "forward looking" in that there are many features that have not been implemented by anyone, including Dolby. That is why ISDCF came up with a list of which features were supported by all playback systems. That is now a SMPTE RDD (Registered Disclosure Document).
An interesting distinction between the Dolby and DTS systems was that Dolby uses XYZ coordinates that are scaled to the auditorium (0.0 to 1.0 along the screen, the side walls, and vertically). DTS, instead, used polar coordinates based on a reference listening position. When a DTS renderer plays an IAB bitstream, it does the rectangular to polar conversion.
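Roughly, the geometry of that rectangular-to-polar conversion looks like the sketch below. The reference listening position and the axis conventions are assumptions for illustration only, not what any particular renderer actually uses.

```python
import math

def xyz_to_polar(x, y, z, ref=(0.5, 0.62, 0.0)):
    """Convert a normalized room position (0..1 on each axis, as in the
    Dolby-style convention described above) to azimuth/elevation/distance
    relative to a reference listening position. The reference point and the
    axis conventions here are assumptions, not the actual DTS renderer math."""
    dx, dy, dz = x - ref[0], y - ref[1], z - ref[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)   # in normalized room units
    azimuth = math.degrees(math.atan2(dx, -dy))          # 0 deg = toward the screen
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation, distance

# An object at screen-left, halfway up the screen:
print(xyz_to_polar(0.0, 0.0, 0.5))
```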
It's been a while since I've looked at this. As a matter of fact, back in 2008 I was involved in the development of an object based sound system myself, with the primary application in the leisure industry, like theme park attractions. That idea eventually crashed and burned... When Dolby started to hint at their "revolutionary Atmos system", it was pretty clear what it was. By then, there were already some competing systems on the market, alas most still in beta.
But I prefer the polar coordinate system used by both the original DTS:X implementation and IOSONO's WFS system over the xyz system used by Atmos and the original IMM Sound implementation (I think Atmos largely used the IMM Sound bitstream specifications, didn't they?). It's clear that Atmos was designed with the setting of a cinema in mind, while DTS:X is a much more scalable approach that doesn't tie the format to a certain assumption of what the playback room looks like. I've also not been a fan of the idea of the "surround bed"; I would rather have seen it implemented as a virtual, planar sound source instead of something that eventually maps to a fixed, pre-defined array of speakers.
I didn't like the polar coordinate system since it was "egocentric" (from a certain listener's perspective) instead of "allocentric" (which, to me, concentrates on the location of the sound instead of the direction perceived by an individual listener which changes based on where the listener is, though the sound source has not moved). Somewhat related is how levels should be set. I think levels should be set based on a set distance from each loudspeaker instead of being the same at the reference listening position. As an example I've used, imagine a bee buzzing along the left side wall of the cinema. As the bee flies to the position along the wall closest to the listener, the buzz gets louder because the distance is shorter, not because the bee is buzzing more loudly. With reference position levels, the recorded buzz level has to be reduced at the distant loudspeaker to make it correct. If we used the same recorded level on all the loudspeakers, the buzz level would not change as the bee got closer. I think the buzz should be recorded at the same level no matter where it is. But I lost that argument.
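Just to put numbers on the bee example, here is a toy calculation using the free-field inverse-distance law; the source level and the speaker distances are invented, and real auditoria are of course not free field.

```python
import math

def spl_at_listener(source_db_at_1m, distance_m):
    """Free-field inverse-distance law: -6 dB per doubling of distance.
    Real auditoria don't behave like a free field; this only illustrates the point."""
    return source_db_at_1m - 20 * math.log10(distance_m)

# The bee "buzz" encoded at the same level everywhere, heard first from a
# left-wall speaker 12 m from the listener and then from one only 3 m away.
# The 70 dB figure and the distances are invented for the example.
for d in (12.0, 3.0):
    print(f"{d:4.1f} m -> {spl_at_listener(70.0, d):5.1f} dB at the listener")

# The closer speaker ends up about 12 dB louder purely from geometry, which is
# the behavior argued for above. With reference-position calibration, both
# speakers are aligned to the same level at the listener, so the mix itself
# has to encode the distance attenuation instead.
```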
Wavefront Synthesis is very interesting. I have not studied it extensively. But, for sounds outside the auditorium, I can imagine the wall covered with loudspeakers on the inside of the wall and covered with microphones on the outside. As the sound arrives at the microphones, the wavefront is reproduced inside the auditorium as if the wall were not there. Record it and play it back, and the sounds are reproduced. I don't know how you reproduce sounds that occur within the auditorium, though.
For individual immersive sound, the Smyth Realiser looks interesting. I have not heard one, but have been told it is very good. It includes head tracking and the individual's head related transfer function to pretty much reproduce an acoustic environment.
I didn't like the polar coordinate system since it was "egocentric" (from a certain listener's perspective) instead of "allocentric" (which, to me, concentrates on the location of the sound instead of the direction perceived by an individual listener which changes based on where the listener is, though the sound source has not moved).
From my point of view, if you render an "egocentric" soundtrack in a room that has multiple listeners, like a cinema auditorium, you take the center of the "listening platform" as your origin. This was also the basis of the format we came up with. The listening platform could be anything: a single listener with headphones, an auditorium, a waiting room or even a moving ride vehicle. In my opinion, a true object-based sound format should be largely independent of the playback environment. It's the task of the rendering audio system to recreate the track as faithfully as possible, given the limitations at hand.
Somewhat related is how levels should be set. I think levels should be set based on a set distance from each loudspeaker instead of being the same at the reference listening position. As an example I've used, imagine a bee buzzing along the left side wall of the cinema. As the bee flies to the position along the wall closest to the listener, the buzz gets louder because the distance is shorter, not because the bee is buzzing more loudly. With reference position levels, the recorded buzz level has to be reduced at the distant loudspeaker to make it correct. If we used the same recorded level on all the loudspeakers, the buzz level would not change as the bee got closer. I think the buzz should be recorded at the same level no matter where it is. But I lost that argument.
Actually, I completely agree with your argument here.
Wavefront Synthesis is very interesting. I have not studied it extensively. But, for sounds outside the auditorium, I can imagine the wall covered with loudspeakers on the inside of the wall and covered with microphones on the outside. As the sound arrives at the microphones, the wavefront is reproduced inside the auditorium as if the wall were not there. Record it and play it back, and the sounds are reproduced. I don't know how you reproduce sounds that occur within the auditorium, though.
WFS requires quite a bit more rendering power than simply panning a sound between speakers and rescaling the waveform amplitude for each speaker before mixing it into the total output of the speaker channel. You essentially need to recreate the original wavefront of every sound for every speaker, using spherical waves as the primitive. WFS would've been "version 2" of our implementation; "version 1" would be the "Atmos way", where it is essentially impossible to create sounds that originate from within the boundary of the "speaker layer"... we had a specific name for that back then...
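As a rough idea of where the extra rendering power goes, here is a toy per-speaker delay and gain calculation for a virtual point source, using a spherical wave as the primitive. Real WFS driving functions also involve pre-equalization filtering and array tapering, which are omitted here, and all the numbers are purely illustrative.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def wfs_delays_and_gains(source_xy, speaker_positions_xy):
    """Toy WFS-style computation: for a virtual point source behind the array,
    each speaker gets a delay proportional to its distance from the source and
    a gain falling off with that distance (spherical-wave primitive). Real WFS
    driving functions also involve pre-equalization and tapering windows,
    which are omitted here."""
    out = []
    for sx, sy in speaker_positions_xy:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay_s = r / SPEED_OF_SOUND
        gain = 1.0 / max(r, 0.1)          # clamp to avoid blowing up near the source
        out.append((delay_s, gain))
    return out

# A line of 8 wall speakers, 0.5 m apart, with a virtual source 4 m outside the wall.
speakers = [(i * 0.5, 0.0) for i in range(8)]
for d, g in wfs_delays_and_gains((1.75, -4.0), speakers):
    print(f"delay {d * 1000:5.2f} ms, gain {g:4.2f}")
```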
I heard the IOSONO system before Barco effectively killed it; actually, I believe that millions of people have since heard it without knowing it. It used dense speaker arrays placed around the auditorium. It could create realistic sounds within the horizontal plane that seemed to originate from within the auditorium. I think one of the last remaining prototype installations is inside the Haunted Mansion dark ride in Florida, in the stretching room scene. That one uses pre-recorded tracks played from simple "binloops" per speaker though, so no dynamic rendering.
For individual immersive sound, the Smyth Realiser looks interesting. I have not heard one, but have been told it is very good. It includes head tracking and the individual's head related transfer function to pretty much reproduce an acoustic environment.
Interesting stuff indeed. I guess a lot more can be done with headphones, as they overcome a lot of the limitations and challenges of "room sized" audio systems. Apple also claims to have included "Spatial Audio" with head tracking in their headset, including Dolby Atmos support...
Yes. However, if your Trinnov is the 24 output channel model, it would integrate into a potential upgrade path to immersive audio for your room, if you ever wanted to do that.
Sometimes we get a 5.1 OV and an ATMOS/IAB+7.1 VF, but no separate 7.1 VF (typically, this happens with Sony titles).
While we don't have any ATMOS or IAB system, we usually get ATMOS/IAB KDMs as well, and our Sony plays the 7.1 of the ATMOS VF happily, ignoring the ATMOS/IAB part.
Yeah, I would think this is what the bedding accomplishes... the backwards-compatible standard surround configurations within both ATMOS and IAB DCPs. If you have a key, it should play what your system is capable of, assuming the right processor preset. Although I'm unsure if 5.1 and 7.1 systems playing such files would sound as intended. If it was bedded with 7.1 data, presumably it would be equivalent to playing a 7.1 DCP with 5.1 selected on the processor. Not "quite" the same as playing the intended 5.1. But I don't know, maybe the backwards compatibility accounts for that possibility? Or maybe all 7.1 mixes only "add" rear mixes, rather than altering the mix of the Rs and Ls surrounds too?
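For what it's worth, the generic fold-down I'm imagining when a 7.1 bed is played with 5.1 selected would simply sum the rear surrounds into the side surrounds, something like the sketch below. The channel names and the -3 dB sum are assumptions for illustration, not any particular processor's documented behavior.

```python
# Toy 7.1 -> 5.1 fold-down: rear surrounds summed into the side surrounds.
# Channel names and the -3 dB attenuation are assumptions for illustration only.
ATTEN = 10 ** (-3 / 20)   # about 0.707, so two equal signals don't jump +6 dB

def fold_71_to_51(frame):
    """frame: dict of per-channel sample values for one audio sample."""
    return {
        "L": frame["L"], "R": frame["R"], "C": frame["C"], "LFE": frame["LFE"],
        "Ls": ATTEN * (frame["Lss"] + frame["Lrs"]),
        "Rs": ATTEN * (frame["Rss"] + frame["Rrs"]),
    }

print(fold_71_to_51({"L": 0.1, "R": 0.1, "C": 0.2, "LFE": 0.0,
                     "Lss": 0.3, "Rss": 0.3, "Lrs": 0.2, "Rrs": 0.2}))
```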
Note that the immersive sound bed tracks are in the IAB bitstream, not in the 16-channel main sound. They are packets of data for each editable unit (typically 1/24 second) and include a loudspeaker destination instead of immersive parameters (like an XYZ location). There are separate packets for the sound fragments, which are pointed to by bed or object packets.
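Purely as a conceptual picture of that packet layout, the structure is something like the sketch below; the names and fields are invented for illustration, and the real IAB syntax is a binary bitstream, not Python classes.

```python
from dataclasses import dataclass, field

# Conceptual picture only: field names and types are invented for illustration,
# not the actual IAB bitstream syntax.

@dataclass
class AudioDataPacket:
    """The actual sound fragment for one editable unit (typically 1/24 s)."""
    audio_id: int
    samples: bytes

@dataclass
class BedChannelPacket:
    """Bed element: routed by loudspeaker destination, not by position."""
    audio_id: int          # points at an AudioDataPacket
    speaker: str           # e.g. "L", "C", "R", "Lss"
    gain: float

@dataclass
class ObjectPacket:
    """Object element: routed by immersive parameters like an XYZ location."""
    audio_id: int          # points at an AudioDataPacket
    x: float               # normalized room coordinates, 0.0..1.0
    y: float
    z: float
    gain: float

@dataclass
class IABFrame:
    """One editable unit of the bitstream."""
    beds: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    audio_data: list = field(default_factory=list)
```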
Many DCPs include a 5.1 or 7.1 "backup track" in main sound. SMPTE has an RP that recommends against this since people may THINK they are playing IAB (or Atmos) when they are actually just playing 5.1. I think Dolby typically includes the main sound as a backup in the same way digital sound on film could fall back to analog sound.