Q-SYS Corner

  • Q-SYS For Cinema
    Blog-7, QDS–Part-3, Sample 7.1 Part-2, Audio Flow Part-1, Inputs

    1/22/25

    Current QDS Versions: 9.13.0 and 9.4.8 LTS.
    Sample 7.1 design version: 4.2.0.0
    Introduction

    In the previous Sample 7.1 Blog, I went over the general overview of a Q-SYS design for cinema.
    In this Blog, I will start the process of going through the audio path of the Sample 7.1 design. If you don’t have that design on your computer, you should check out my previous blog as it will describe how to get the design installed on your computer.
    In addition to describing what is going on in the design, I plan on also posing alternative ideas. Part of my point will be that you can always build off of what has already been done and make changes that suit your needs. There isn’t any single “right way” of doing a Q-SYS design (providing that it actually works). In fact, I’m hoping to show these alternative designs so you can see the pros and cons of some decisions.
    If any of the content in this blog happens to show up in a Q-SYS exam, it is not my intention to provide an answer-sheet beyond the discussion of good practice. I have not seen any form of the cinema final exam (my Level-1 was before there was a cinema version).

    Disclosure
    I do not, in any way, work for QSC/Q-SYS. These thoughts are my own based on my own interactions with the product(s) and implementing Q-SYS within actual cinema environments. I do work for a dealer that has sold QSC products since the 1980s, including Q-SYS and its predecessors. For the purposes of this blog, I represent only myself and not my employer(s) or any other company.

    Target Audience

    I am targeting these blogs towards the cinema community, in general. As such, in 2025, there is going to be a rather wide range of knowledge, from the uninitiated to the familiar, with respect to Q-SYS. The way one should think about Q-SYS is significantly different than working with purpose-built cinema processors. However, before long, you can have the same speed and familiarity you have become accustomed to with other sound processors.
    As such, I’m going to go a bit slower, particularly at first, since, for the newcomer, all of these pieces are going to be new. For those of you that are already familiar with Q-SYS and have done your own designs, you can skip past the parts that you already know.
    For those to whom this is new or relatively new, I’ll probably show things in a bit more detail than just stating what the component does.
    There is a presumption that you have either:
    • Gone through the Q-SYS Level-1 online course.
    • Are currently taking the Q-SYS Level-1 online course.
    It would be impractical for me, in a blog form, to cover all of the minutiae (with respect to working within QDS) that are already covered, very well, in the Level-1 videos.
    Link for Q-SYS Level-1 Cinema training

    Inputs
    As the song goes, “Let’s start at the very beginning.” So, in the upper-left of the design is the “Inputs” group.

    DCIO-H
    I have already written a blog on the DCIO/DCIO-H and will refer you to that one for details on it. It is going to be one of the most common Q-SYS peripherals in cinema systems. You will need one per screen. The choice of whether or not to get the “H” version for HDMI is dependent on the needs of that particular screen within the cinema complex.
    There are four pieces to the DCIO-H within the QDS design.
    1. The Digital In.
    2. Analog In.
    3. GPIO.
    4. Status.
    You can bring in (or not) any or all of those pieces, as your needs dictate. It is worth selecting each section, one at a time, and observing the Right-Side Pane (RSP) to see what the properties are for each.
    For example, there are a LOT of potential control pins we could expose, if we need them for part of our design.

    Picture1a.jpg

    We need to decide if we are running a redundant network and, if so, declare it on the RSP and we should also specify its physical location so all of the Auditorium 1 things are grouped together in the inventory (and in other parts of Q-SYS).
    Most peripherals have an “Is Required” setting. If that is set to yes, then if the Core finds it to be missing, it will come up as an error. The DCIO-H is definitely required. However, if you had a lectern that gets plugged in and within that you have a Touchscreen, you probably don’t want that to be “required” because you don’t want the system to be showing an error just because the lectern is in storage or in another theatre.
    The “Analog In” will have far fewer control pin options but they are specific to the “Analog In.”
    For the “GPIO” portion, you should note that you can choose to expose, or not, any category of pins that are not relevant to your design. What is different about this is that audio components normally default to not showing control pins but the “GPIO” defaults to showing them. In the Sample 7.1 design, they have the GPIO pins set to “No.” If you want/need them, change that to yes and see how it changes in the Sample 7.1 design. If you are not using something, save on clutter by not showing unnecessary things.
    Let’s look inside of the “Digital In.”

    Picture2.jpg

    What do you find inside there that would be handy to have outside so you don’t need to open it later? In my opinion:
    • Status LED.
    • HDMI Enable/9-16 Enable.
    • And possibly the Audio Format.
    Those are things that will aid you during troubleshooting since they are “at a glance” sort of things.

    Picture3.jpg

    (I have resized and labeled/colored the buttons to better fit and make clear what they are doing).
    With those on the outside, we know that it is in DCP mode (using 9-16 rather than HDMI) and that its status is “OK.” Conversely, if it was in HDMI mode, that grey box would be showing the HDMI audio format. If it had just a dash (-) we would know why there is no sound…it isn’t getting an HDMI audio signal.

    After initial configuration, what other reason is there to open that component? None other than to see the input meters. That is going to be exceedingly rare as you should have some form of metering or signal presence in your design, elsewhere. I would advise against dragging its input meters out since, as we’ve already discovered (in the previous blog), LSP items like the DCIO-H do not copy from design to design. So, if you are going to copy your design from screen to screen or complex to complex, that just becomes more work to duplicate. You would be better served by adding a meter component so that it will copy.

    In fact, about the only thing inside the DCIO-H component you really need to consider adjusting is the “Upmix” dropdown (for HDMI sources only). If you have a 7.1 system, setting this to 7.1 will keep things like 2.0 mixes from playing only from Left and Right…which creates a big hole in cinemas. It is better to have a “Prologic” decoder create a Center (and surrounds) out of that. Using 7.1 will ensure that the typical cinema speakers will get a more traditional channel assignment. Since we don’t have a physical DCIO-H in emulation mode, you won’t be offered the upmix options. However, if you did have a real DCIO-H to connect to, it would look like this:

    Picture4.jpg

    Key Tip: Help <F1>
    While we’re at it, this is a good time to get acquainted with the F1 button on your keyboard. That is the hot button for “Help.” If you select any of the DCIO-H components in the design and press <F1> (or go to the Help menu…it is the first option), it should open your default browser and pull up the Help page for the component you have selected. It is like having a custom instruction manual at your fingertips.

    If you selected one of the DCIO-H components when pressing <F1>, it should have opened the DCIO-H’s Help page. If you open up the Controls section, it will give you information on how the Upmix works as well as other information on the DCIO-H.

    If you open the “Control Pins” section, you can see what is available there and how the pins will respond to components that work with Values, Strings, or Position (how a component’s control pins behave is dependent on what they are connected to; almost all pins have to consider how they will behave with those three categories).
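
    As an aside, here is a rough mental model (my own sketch in Python, not Q-SYS internals) of how a single control can present all three of those faces at once, using a gain control as the example. The range values are placeholders:

        # Illustrative only: the three "faces" of one control.
        # Value = engineering units, String = formatted text, Position = normalized 0..1.
        class GainControl:
            def __init__(self, min_db: float = -100.0, max_db: float = 20.0):
                self.min_db, self.max_db = min_db, max_db
                self.value = 0.0                        # dB (the Value face)

            @property
            def position(self) -> float:                # 0..1 across the control's span
                return (self.value - self.min_db) / (self.max_db - self.min_db)

            @property
            def string(self) -> str:                    # what a text field would show
                return f"{self.value:.1f}dB"

        g = GainControl()
        g.value = -20.0
        print(g.value, g.string, round(g.position, 3))  # -20.0 -20.0dB 0.667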

    [Blog 7, Page 1 of 3]


    • The Start of How-To Create An Alternative HDMI Upmix (Example)

      I’m going to take a brief detour to bring up why you might want to expose control pins on components so you can do more with a component than just a canned system.

      These are just the beginning steps of how to make an HDMI format decoder. There could be an entire discussion/blog on how to make your own upmixer.

      Let’s say you don’t like the way the Upmixer handles various formats and you want to roll your own. How might you do that? The easy part is the actual 2:4 decoder since Q-SYS has one of those in the RSP components. It is called the Active Matrix Decoder so it is best found using the search…or just copy the one that is already in the design. But how do you know if you need to use that or if you just need to pass-through, say, a 5.1/7.1 audio?

      If the HDMI signal identifies itself as having a Center channel (in its metadata), the DCIO-H’s “C” LED will light and its control pin will go high. Likewise, if you have content with 7.1, you will have both Left Back (Lb) and Right Back (Rb). Expose those pins (you can use either Lb or Rb since there isn’t a situation where only one will be declared active…these are logic pins, not signal level):

      Picture5.jpg

      With “C” and “Lb” status pins you can set yourself up with a truth table of what format needs to be decoded and what channels to send to where. I’d add the “HDMI Enable” pin too since we only care about decoding/upmixing when HDMI is selected.

      Picture6.jpg

      This is how it might start to look if you use a Container to hold your own HDMI decoder. As you look at other components in the design, take a moment to see what control pins might be available to you and how they might allow you to make your design fit your needs a bit better.
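
      To make that truth-table idea concrete, here is a small sketch (in Python, outside of QDS; the decision logic is my assumption, not QSC’s spec) of how the “C,” “Lb,” and “HDMI Enable” pins could pick a decode path:

          # Hypothetical decode selection driven by the DCIO-H status pins.
          def hdmi_decode_mode(hdmi_enabled: bool, c_active: bool, lb_active: bool) -> str:
              if not hdmi_enabled:
                  return "bypass"       # DCP (AES 9-16) is selected; nothing to do
              if lb_active:
                  return "pass-7.1"     # Lb/Rb present: discrete 7.1, pass through
              if c_active:
                  return "pass-5.1"     # Center but no backs: discrete 5.1
              return "upmix-2.0"        # 2.0 only: send it to the matrix decoder

          # Print the whole truth table:
          for hdmi in (False, True):
              for c in (False, True):
                  for lb in (False, True):
                      print(hdmi, c, lb, "->", hdmi_decode_mode(hdmi, c, lb))

      In QDS itself, the same decision would be wired up with logic gates and a Control Router rather than a script, but the table is identical.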

      DCIO-H Audio Wiring.

      Okay, we know that we want to minimize repeat work and that we might want to add some input meters. What can we do to lessen the work down the road?

      Picture7.jpg

      We probably don’t need to worry about AES15 and AES16, at least not at this time. They were an alternate location for HI/VI channels. Note the jagged-edge Signal Names; this indicates that they are not mated up elsewhere in the design. So, we can safely delete those.
      How can we keep our wiring but eliminate the issue with copying? Signal Snakes. Let’s add Signal Snakes to keep our rework to a minimum. How can we create Signal Snakes?

      Key Tip: Signal Snakes
      Signal Snakes (note, I may refer to them as Wire Snakes too) are located in the RSP. Just search for “Snake.” However, there is a shortcut! If you select the pins you want to wire to a snake, then drag those pins like you are dragging wires (the wires will start to form), press the <space bar> and presto…the right sized snake, all wired up!
      Here is the process for the DCIO-H, presuming we are going to create two 8-channel snakes (we could do one 16-channel snake but we have more flexibility if we make two, since the needs of the HDMI audio may be different than for the DCP audio).
      1. Delete the existing wires from the DCIO-H to the “Routing” block.
      2. Select the first 8 pins (including the two that have Signal Names AES7-A1 and AES8-A1). You can do this one at a time with <shift-click> or by LTR (Left-to-Right) dragging or even by RTL (Right-to-Left) dragging.
      3. Then <click-hold> and drag the top pin until wires form (and hold).
      4. Press the <space bar>.
      If you got it right, it should look like this:

      Picture8.jpg

      Repeat for the bottom 8 pins.

      Picture9.jpg

      Now, over on the “Routing” component, repeat the process.
      Let’s move those Signal Names for the ADA channels over to the Signal Snakes feeding the “Routing” component using <CTRL-X> and <CTRL-V> to cut and paste the Signal Names to where we want them:

      Picture10.jpg

      We’re still not done because we need to connect the snakes. However, it might be better to use Signal Names rather than wires (it is our choice) so we can have repeat destinations, if we want to add meters or something. Additionally, we should shift the location of the DCIO-H input components and maybe tweak the group box size a little, so things will fit comfortably:

      Picture11.jpg

      Now, if we copy the design (for additional screens), the only thing we need to do, as far as Digital In goes, is add the DCIO-H and drag its wires over to a Signal Snake. Those ADA Signal Names will also copy fine and will automatically update to “A2” on the next copy.

      By this same logic, we should also add Signal Snakes to the Analog In as well (and possibly the Audio Player). Since we’re just using the Snakes to preserve our Signal Names, we can put the Snakes head-to-head and connect with a simple wire:

      Picture12.jpg

      Note too, if you do not have the DCIO-H but just the DCIO, you could skip everything that pertains to the HDMI audio path and just wire up pins 1-8, 11, and 12.

      Key Point: You only consume input channels if you actually connect wires to the input pin. So, you only use 16 inputs if you actually run wires to all 16. If you only run wires to 10 inputs that you need, the empty pins don’t count towards your channel count limit. If you are placing more than one screen on a Core, this could be very important since, in cinema, input channel count is often the limiting factor on the Core’s size.

      Input Meters
      Since I brought up meters on the inputs, here is how you might add them. I would bring an empty container into the design and declare it without input or output pins. I would then bring in “Level Meters” for both DCP and HDMI (declare them as multichannel with 8-channels each).

      Then, wire the meters with Signal Snakes. Copy the Signal Names from the main schematic to the Signal Snakes on the Meters:

      Picture13.jpg

      Then, I would copy the meters to the schematic and resize them as horizontal meters (if you make a meter wider than it is tall, it will, automatically, convert to a horizontal meter). The final product would look something like this:

      Picture14.jpg

      You’ll know, at a glance, if anything is coming into the system via the DCIO-H’s Digital In component. If you want, the same can be done with the other inputs (e.g. Analog In) but I’ll leave that up to you to decide.

      Hopefully, you can see how using Signal Snakes can make adding something like these meters far easier than just using wires that run everywhere. And, if we duplicate this Screen, these meters will copy/paste just fine and their labels will update to A2 as well.

      Analog In
      The Analog input has the typical inputs of a Cinema Processor for Non-Sync and a Microphone. The Non-Sync inputs are on a stereo 3.5mm (mini-phone) jack. The Microphone is via XLR. Note, the XLR input can accept both microphone level as well as line level audio, so it is a bit of a universal mono input.
      Let’s look inside it a bit:

      Picture15.jpg

      We probably don’t need to drag its status LED out since we have that information right above on the Digital Input (the status is a duplicate). If you are concerned about clipping any of the inputs you might want to drag the “Clip” LEDs out. Depending on your installation, you might want to have a whole microphone calibration set up on your UCI (User Control Interface). For now, I’m not going to bring any of the controls out.

      [Blog-7, Page 2 of 3]



      • You will need to set the Microphone Preamp Gain based on the mic(s) you are using. This level should be done with this component and not with a digital gain downstream. It is fine to add a fader downstream to balance things out but in terms of getting the microphone’s gain structure right for the system (good level, without clipping), that needs to be done here on the Analog In.

        The Left/Right Non-Sync inputs have a cruder Preamp Sensitivity…it is just a high/low setting. Use whatever is appropriate for your source (good level, without clipping).

        Key Point:
        With analog audio, you want to add as much gain as you will EVER need as early as possible. This is why I’ve stated that you should set the Preamp Gains here and not with other gains downstream. This will result in the lowest noise and best quality audio. Once you are within the digital domain in Q-SYS, it really doesn’t matter (that much, if at all) where you add/remove the gain to balance things out. Analog inputs are one of the few exceptions to my recommendations of not to adjust audio within an input component.

        Audio Player
        Every Q-SYS Core is capable of storing audio. As such, it can be your music source as well as pre-recorded messages or even just sound effects, if you want to liven up your UCIs. If you want to come up with a snazzy “Red-Alert” sound for when a fault is detected within the system and play it out of the booth monitors, go for it (just know, cute things can get annoying after they’ve gone off 1000 times, so think about the sound and the likelihood of it going off a lot).

        Of course, if you use your Core as an audio player for music, you will have the overhead of uploading that content, selecting it and playing it (as well as deleting off stale content). It could prove handy if you want to keep seasonal music that you can switch in/out and nothing is stopping you from using a mix of analog inputs and the Audio Player.

        The Audio Player has quite a few controls that one might want to drag out into a UCI if you are providing that functionality to the manager/projectionist:

        Picture16.jpg

        To get more detail on the Audio Player, select it in the schematic and press <F1> to open the help file on it. There you will find its capabilities (file types too).

        You can have more than one Audio Player in your design. You are limited by total track count (16 is the standard non-upgraded amount) and storage space. This will vary by Core and QDS version so always check the Help file for the QDS version you are using. There are both Media Drive upgrades (more storage) and Multi-track upgrades (more total tracks). You should check pricing if you are planning on using the Audio Player beyond “typical” basic playback.

        So, if you are using a Stereo Audio Player for music, you still have 14-tracks remaining that you can use for other things.

        For a couple of videos that discuss the Audio Player in more depth, including how to get audio files up to your core, check out this video (and the one that comes right after it):
        Audio Player Video
        Audio File Management Video

        Cinema Pink Noise

        Picture17.jpg

        Somewhere in your design, you should have a Cinema Pink Noise Generator since you will need to tune the room, somehow. Where it is located will depend on your design. Often, you don’t need more than one because you are not going to need multiple instances of pink noise playing at the same time. You can route one Pink Noise generator to as many signal paths as you need using a Signal Name. However, I’m sure we’ll find that due to how they are routing the pink noise in this design, we will need to have one per screen or they wouldn’t have appended the “A1” to both the output and the Mute control pin.

        Key Point: There is more than one type of Pink Noise component available. For cinema, we always want to use the Cinema Pink Noise version. The difference is in the crest factor of the noise. In order for the various test equipment to read the same from system-to-system and match what was used when the film was mixed, we need to use, as close as possible, the same reference signal. Additionally, the level needs to be set to -20dBFS when setting the speaker playback level with your SPL meter.
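
        To put numbers on that, here is a quick sketch (Python with NumPy; this is generic noise, not QSC’s Cinema Pink Noise algorithm) of what “-20dBFS” means and how a crest factor is measured:

            import numpy as np

            rng = np.random.default_rng(0)
            noise = rng.standard_normal(48000)                # placeholder noise, 1s @ 48kHz
            target_rms = 10 ** (-20.0 / 20.0)                 # -20dBFS as a linear value (0.1)
            noise *= target_rms / np.sqrt(np.mean(noise**2))  # scale the RMS to -20dBFS

            rms_dbfs = 20 * np.log10(np.sqrt(np.mean(noise**2)))
            crest_db = 20 * np.log10(np.max(np.abs(noise)) / np.sqrt(np.mean(noise**2)))
            print(round(rms_dbfs, 1), "dBFS RMS; crest factor", round(crest_db, 1), "dB")

        The Cinema Pink Noise component differs from the generic one in exactly that crest-factor figure, which is why the two are not interchangeable for room tuning.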

        In my designs, I will create a separate Snapshot just to send the Pink Noise Generator to -20dBFS. This does two things.
        1. It allows me to set the pink noise at reference quickly.
        2. It provides a visual confirmation of its level without having to open the component.
        Now, we could drag the level out so we know what the level is and adjust it, if necessary.

        Picture18.jpg

        And, if you prefer, we can select that “knob” and turn it into a text display using the properties on the RSP. They will function the same and you can key in your desired level.

        Picture19.jpg

        Why would you want to adjust the pink noise level in the first place? I will, often, start a new system with the pink noise set to -40dB or so until all speakers have been tested to ensure that nothing was hooked up wrong (e.g., HF components connected to amplifiers playing LF signals). Also, you don’t want to blast anything. So, if you left any level controls for your “B-chain” at “0dB” that may be a bit loud. Starting the pink noise a bit low, at first, is just a safety thing.

        So (optionally), if you want to create a Snapshot that sets the pink noise to -20dBFS (reference), create a new Snapshot and label it appropriately. Drag in just the pink noise level control. You only need 1 preset in it (unless you want to set up a couple of levels).

        Picture20.jpg

        While the pink noise level is at -20dB, “Save” that to preset 1. You can then drag out the “Load 1” button and give it a suitable name:

        Picture21.jpg

        Let’s say you adjust it down for initial testing to -40dB:

        Picture22.jpg

        Notice how the button went dark? This lets us know that the pink noise level is not at reference. You can copy/paste this button anywhere in your schematic that it will be handy during your testing (near the EQ, for example). The same goes for the level adjust. You don’t have to scroll around, if you don’t want to. Just copy and paste elements to where they suit your needs.

        Now, as we will discover in a future blog, the Pink Noise component is included in the “PINK-A1” Snapshot bank so, by virtue of recalling its presets, the pink noise level will adjust back to ‑20dB that way too. There is nothing to stop you from using the same schematic components in multiple Snapshot banks. So, you might have to think things out a bit since you might have two Snapshots that “fight” each other.

        Loudspeaker Monitor

        Picture23.jpg

        The Loudspeaker Monitor component is an LSP component since it pertains to the Core itself, so there can only be one in the design (and it doesn’t copy/paste from auditorium to auditorium).
        In simple terms, it is the amplifier output monitor. As we are accustomed to within cinema, we want to be able to monitor the output of the amplifier to know that the audio signal is getting all of the way out to the speaker. If you hear the audio in the amplifier out but not in the theatre, then we know that either the speaker has failed or a wire has become disconnected (which should trigger a load-fault too).


        You can only monitor one output at a time, with the exception that if you have a multi-way speaker (2-way, 4-way…etc.), you can monitor a summed output.

        Another catch is, it only works with Q-SYS native amplifiers (DPA-Q, CX-Q, SPA-Q) and Dataport amplifiers (DCA, CX) with Dataport cards mounted in a suitable I/O frame (now discontinued) or the Core 510i. Both the amplifier and speaker components have the “Listen” button that feeds the Loudspeaker Monitor component. While they are equivalent buttons, only on the speaker component can you monitor all sections of the multi-way speaker as a summed signal. Listen buttons are mutually exclusive. But you can see that between the speaker and mating amplifier components, they track together.

        Picture24.jpg

        If you were to try and select the speaker from the amplifier side, in the example above, you’d only be able to select Channel A or Channel B.

        Note, you can set the “Listen” level for each output, if you want to balance out the response.

        Since there is just the one Loudspeaker Monitor, if you are using one Core for multiple screens, there is some overhead in ensuring that you are always listening to the theatre you are working with. For instance, if the last person was monitoring the Center speaker in theatre 1, and you were to go to theatre 2 and send the Loudspeaker Monitor to the booth monitor output, you’d still be listening to theatre 1 until you changed the selection.

        Conclusions

        We’re on our way to understanding, and possibly modifying, the Sample 7.1 design. Hopefully, the concepts of those input components make sense. Don’t be surprised if the design evolves a bit more as we go and some of the things discussed in this blog continue to evolve too. Depending on where you are reading this (if comments or replies are allowed), you are welcome to ask questions and start a dialog.
        ©2025 by Steve Guttag

        [Blog-7, Page 3 of 3, End of Blog]



        • Update for the IMS3000 plugin:

          image.png



          • Cool beans. 3 for 3 of my requests. Now to see how well they implemented them! My IMS3000 "companion" module can get smaller and smaller!



            • Q-SYS For Cinema
              Blog-8, QDS–Part-4, Sample 7.1 Part-3, Audio Flow Part-2, Processing Part-1 Routing
              2/23/25

              Current QDS Versions: 9.13.0 and 9.4.8 LTS.
              Sample 7.1 design version: 4.2.0.0


              Introduction

              In the previous Sample 7.1 Blog, I went over the inputs of the Sample 7.1 Design.
              In this Blog, I will continue through the audio path of the Sample 7.1 design and move into the “Processing” section. If you don’t have that design on your computer, you should check out my previous blog (Blog 6) as it will describe how to get the design installed on your computer.
              In my opinion, this section, what they are calling “Processing,” is probably the most critical part of your design. This is where you will set things up to accomplish the functionality the system needs. If you think about every cinema processor you’ve ever encountered (analog or digital), it is in this section that your likes or dislikes were generated. If this is your first time working with a drag-and-drop type DSP, this may be your first time having a hand at being the “architect” of a cinema processor.
              No doubt, you’ll find that your ideal design will evolve as you see how the real world interacts with your design. In this section, as much as any, I’ll present alternatives or analysis. This is not a critique of the design before us so much (well, in some cases it might be) as it is a means of showing you different strategies.
              And now for the obligatory disclaimers/disclosures:

              Disclaimer

              If any of the content in this blog happens to show up in a Q-SYS exam, it is not my intention to provide an answer-sheet beyond the discussion of good practice. I have not seen any form of the cinema final exam (my Level-1 was before there was a cinema version).

              Disclosure

              I do not, in any way, work for QSC/Q-SYS. These thoughts are my own based on my own interactions with the product(s) and implementing Q-SYS within actual cinema environments. I do work for a dealer that has sold QSC products since the 1980s, including Q-SYS and its predecessors. For the purposes of this blog, I represent only myself and not my employer(s) or any other company.

              Q-SYS Level-1

              There is a presumption that you have either:
              • Gone through the Q-SYS Level-1 online course.
              • Are currently taking the Q-SYS Level-1 online course.
              It would be impractical for me, in a blog form, to cover all of the minutiae (with respect to working within QDS) that are already covered, very well, in the Level-1 videos.
              Link for Q-SYS Level-1 Cinema training

              Processing

              So, in this blog, we’re going to explore just the audio portion of the “Processing” section of the Sample 7.1 design. The control portion will be handled in a later blog. And, within processing, just the “Routing” component (and related) is going to be discussed.

              Signal Path Quick Overview

              We start with a Router, which chooses what input we’re going to use as a source. That sends the audio over to a fader (volume control). The audio then heads over to a delay for achieving lip-sync and finishes with a dedicated surround level-set for achieving the proper balance when going between 5.1 and 7.1 mixes. There is also a Bypass mixer (very important), though not strictly required for a complete cinema processor.
              The signal flow is very linear and easy to follow.
              The other audio parts include:
              • The Active Matrix Surround Decoder (more on that one later).
              • Assistive Audio (would be just as appropriate in the Output section).
              • Meters (using Signal Names to provide meters for whatever is selected).
              • Booth Monitor (it isn’t really processing but it is as appropriate here as anywhere else in the schematic. I tend to have the Booth Monitor in its own group-boxed off section unto itself).
              The key parts of every cinema processor design are:
              • Source selector (called “Routing” in this design).
              • Fader (called “Master Fader” in this design).
              • Tuning/Equalization/Level balancing.
              It doesn’t sound so complicated, does it? It really boils down to three(ish) things but how you choose to implement those things and how you feed them or take sound from them can make all of the difference in usability (combined with your control system/logic).
              Some of the images below are from the design that we modified in Blog-7 but everything applies to the unmodified version too.

              Source Selection (Routing)

              Picture2.jpg

              For a video description of how the signals are flowing through the router please follow the links below (it’s part of the Level-1 cinema training course).

              Router Signal Flow Video

              You will need to choose how to handle multiple sources…unless you JUST want one source (DCP). Typically, two specific components are up to the task: the Audio Router and the Matrix Mixer. The Sample 7.1 Design uses an Audio Router.
              The Audio Router has a lot going for it. It is pretty simple and easy to follow. Double-click on it and see what is inside. In the example below, I have the design emulating and “Feature 7.1” is selected. If your design is not in this condition, you might see something different.

              Picture3.jpg

              Inputs 1-6 go out through Outputs 1-6. Additionally, Inputs 11 and 12 go out of outputs 7 and 8. If you are familiar with the channel arrangement on DCPs, you would recognize that these are main audio channels, including the 7.1 surround channels. Inputs 7 and 8, the ADA channels (HI and VI-N), are not being routed by the “Routing” in this design. Can you see how that design choice might trip you up later on? What if you get another source that has ADA channels too?
              You should press the other formats and see how the Router changes so you can see what it is doing and how it is handling each format. Note that, from the Router’s perspective, there is no difference between Feature 5.1 and Trailer 5.1. A DCP 5.1 format is the same routing, regardless of where you are in the show.

              The Audio Router

              Let’s go over the Audio Router a bit to understand its pros and cons as well as how it can be configured.
              First, from a DSP standpoint, routers are “free.” They consume essentially no DSP resources so they are very lightweight from a design standpoint. A DSP processor isn’t actually connecting wires up to make routes. It is making a list of “nodes.” When you add a “Signal Name” you are just naming that node. Internally, it was tracking that node before you graced it with a name. All a router does is connect nodes together. There is almost no DSP processing in use there.
              As an example, here is the Check Design of a BLANK design with a Core 110f.

              Picture4.jpg

              So, it costs 2% of the DSP’s processing to have nothing in the design at all.
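
              If it helps to see why, here is a toy model (my illustration in Python, not Q-SYS internals) of a router as nothing more than a table of node connections; no per-sample math ever happens:

                  import numpy as np

                  # out_to_in maps each output pin to the input node it reads from.
                  def route(inputs, out_to_in):
                      outputs = np.zeros((max(out_to_in) + 1, inputs.shape[1]))
                      for out_pin, in_pin in out_to_in.items():
                          outputs[out_pin] = inputs[in_pin]   # just a copy, no processing
                      return outputs

                  # "Feature 7.1" as described above (0-based): 1-6 -> 1-6, 11/12 -> 7/8.
                  feature_71 = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 10, 7: 11}
                  audio = np.random.randn(24, 480)            # 24 inputs, 10ms @ 48kHz
                  print(route(audio, feature_71).shape)       # (8, 480)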

              [Blog 8, Page 1 of 3]



              • Let’s add in a router (24x8, just like in our design):

                Picture5.jpg

                No change, still just 2% of the processing is consumed. Okay, let’s add a bunch of routers. Say 10 of them for a 10-plex. That’s 240 inputs with 80 outputs:

                Picture6.jpg

                All of that and we just kicked it up to 3% (just 1% more). You can be pretty free and loose with routers in your design and not have to budget things. So, if you are doing a more complex design and are running out of DSP resources, perhaps using routers where you are using matrix mixers or other switching methods might lower your DSP impact.
                They cannot mix inputs but they do have mute buttons on their outputs, so you can control the outputs some.
                While you cannot perform a crossfade directly from one input to another, you could “fake” it by using a “Gain” on the output with a max of 0 and a min of -100, combined with some logic and, perhaps, an LFO (Low Frequency Oscillator). You would need to ramp the level down, make the switch, and ramp the level up.
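
                Sketched in Python (the timing values are placeholders; in QDS this would be a Gain’s ramp time plus some logic, not a script), the sequence looks like this:

                    import time

                    def fade_switch(set_gain_db, select_input, new_input, ramp_s=0.25, steps=25):
                        for i in range(steps + 1):             # ramp 0dB down to -100dB
                            set_gain_db(-100.0 * i / steps)
                            time.sleep(ramp_s / steps)
                        select_input(new_input)                # switch while (nearly) silent
                        for i in range(steps + 1):             # ramp back up to 0dB
                            set_gain_db(-100.0 * (1 - i / steps))
                            time.sleep(ramp_s / steps)

                    # Stand-ins for the real gain and router controls:
                    fade_switch(lambda db: print(f"gain {db:6.1f}dB"),
                                lambda n: print(f"router -> input {n}"), new_input=3)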

                Router Properties

                There is another way you can configure a Router that may make your design easier to conceptualize, expand, and even control. Router Properties are over in the Right-Side Pane (RSP). You can control if it is a Source Router or Destination Router (sometimes it makes more sense to select Destination but for a format selector, we’ll want to stick with Sources). However, rather than having all mono inputs and outputs, we can define it as a multichannel router with 8-channels.
                In this design, we have 6 or so sources and just 1 destination. So, let’s make a 6x1 router. However, since they are 8-channels, that will appear as a 48x8. But remember, routers hardly consume any DSP. We remain at 2% DSP. But look at how it simplifies things:

                Picture7.jpg

                Just 6 buttons. This makes it easier to control. The downsides to this sort of router are:
                • You can’t mute channel-by-channel (rarely needed on an input, but common for things like a Pink Noise tester).
                • If the source is not your designated size, you will either have empty input pins or you will need to duplicate your signals if you want, for example, a mono surround source to play on all four surround channels.
                How might the design change if we were to switch to an 8-channel router?

                Picture8.jpg

                Signal Names/Signal Snakes could clean it up a bit. I also just placed the Mics on the Side-Surrounds (and, as you’ll find out, in a later blog, I wouldn’t even put the microphones through the router, as a source). The “Pre-Show” source that goes through the Active Matrix Surround Decoder needs to have its mono surrounds duplicated and, likely we’ll need to address a balance issue of mono surrounds feeding four zones.

                For the Pink Noise, I added a 1x8 router back in so we could select what channel(s) we wanted active during pink noise testing. This is a feature I’m sure we’ll find was part of the original design.

                What if a new gizmo is installed at the theatre and we need to add another input? This sort of router makes it super-simple by just adding the extra input so it is a 7x1 router. It scales very well (audio path wise). However, if you end up with a lot of sources, the physical size of the component will get pretty big with each source adding another 8 pins. Using Signal Snakes and a Container (or a dedicated schematic page) you can keep the main schematic clean. There is also nothing to stop you from merely giving yourself more space by making the whole design bigger.

                This was just an example; I’m not going to keep the change since we’ll need the original design when we get to the control portion.

                The Matrix Mixer

                The other popular method of source selection is the Matrix Mixer. The Matrix Mixer has everything the router has, plus the ability to mix, adjust levels, label the inputs, and break larger matrixes up into smaller groups. Crossfades are also possible with Matrix Mixers.
                The chief downside to a mixer is that it does consume more DSP resources. Unlike routers, mixers actually have active things in them. They can change the level of the signal and they have MANY such controls inherent in them.

                Picture9.jpg

                So, as a reminder, a blank design with a Core 110f consumes 2% of the DSP resources. Now, if we add just a single 24x8 Matrix Mixer, it is still 2% so it isn’t too big a hit. However, again, let’s say we are going to have 10 of them for our 10-plex. We’re up to 7% of our DSP compared to just 3% for 10 Audio Routers. That is a 4% increase in resources to support a quantity of matrix mixers. Mind you, percentages are relative. You can’t use a Core 110f on a 10-plex. You could use a Core 510i. In that case, 10 Matrix Mixers would still only consume 2% of the Core 510i’s DSP resources (versus 1% for a blank design).
                So, using a couple of modestly sized Matrix Mixers in a design shouldn’t impact your DSP resources very much but they absolutely consume more than an Audio Router. You can also eliminate the separate/dedicated Surround Offset Gain, since that can be handled within the mixer.

                Input Side vs Output Side of the Router/Mixer

                Here is a concept you should keep in mind when you are designing your system. Regardless of whether it is a Router or a Matrix Mixer, if the component is on the left (input) side, it will apply to just the one source. If it is on the right (output) side, the component applies to all sources. Confused? Let me explain a bit more and demonstrate the differences.
                Looking at the Sample 7.1 design, look at the “Master Delay.” It is on the right-side of the Routing component. That means the delay applies to ALL inputs. Is this what you really want? Do DCPs (which typically need 83ms to more than 200ms of delay, due to the video processing, for good lip-sync) have the same sort of video latency (delay) as, say, HDMI sources? Do you want to add delay to your microphones so the presenter gets extra reverb, if not echo? Probably not.
                Now, we can certainly work with the “Master Delay” on the right side of the Router but it has more work involved. We would need to expose its delay pin and then have a Selector configured with the different values for the various formats so that, as we change sources, we change the delay (and possibly bypass it for Microphones), and/or include it in our snapshots so it updates with the format selection. If we did it that way, then you’d have to be very careful setting the lip-sync delay on installation because you would need to update the Snapshot preset without messing up any of its other settings. It would be cumbersome, but it could be done.
                Now, if we put a separate delay for each source on the left side of the router, then they only pertain to their respective input. The trick there is that HDMI shares two channels with DCP in a 7.1 system (11 and 12). We can fix that in a couple of ways by changing the wiring of the Router a bit:

                Picture10.jpg

                Note, since the ADA audio doesn’t go through the router in this design, there was no need to connect them up to it. The three different sources that are presumed to have synchronized video now have independent delays because they are on the left side of the router.
                How might I have economized a bit on the HDMI/DCP delay?

                [Blog 8, Page 2 of 3]



                • First, make the DCP delay into a 16-channel and expose the delay pin. Expose the Enable pins for the AES9-16 and HDMI so we know which delay to use. Add in a Control Router connected to the control pins we just enabled. The Control Router will act as an A/B switch. Add in a “Custom Control” and set that to be a Text Editor (these strange components will be discussed in a future blog). The Custom Control (renamed “Delay Set”) will be the means to select what the delay values should be for the two modes.

                  Picture11.jpg

                  It’s starting to get a bit busy. This version, however, requires no changes to the Router (or snapshots) and, if we duplicate our design for another auditorium, we only have to add those two wires between the DCIO-H and the Control Router. There are always multiple ways of attacking a problem!
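
                  As a sanity check of the logic (the names and delay values are placeholders, not from the design), the Control Router is doing nothing more than this:

                      # Whichever enable pin is high picks which stored value the
                      # "Delay Set" feeds to the 16-channel DCP delay.
                      delay_set_ms = {"AES9-16": 120.0, "HDMI": 60.0}  # example lip-sync values

                      def active_delay(aes_enable: bool, hdmi_enable: bool) -> float:
                          if hdmi_enable:
                              return delay_set_ms["HDMI"]
                          if aes_enable:
                              return delay_set_ms["AES9-16"]
                          return 0.0   # neither digital input active; leave the delay alone

                      print(active_delay(aes_enable=True, hdmi_enable=False))   # 120.0
                      print(active_delay(aes_enable=False, hdmi_enable=True))   # 60.0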

                  Surround Delay

                  Speaking of problems, the Sample 7.1 design does not have anything for Surround Delays!

                  Why Surround Delays?

                  We need a delay on the surrounds because they are physically closer to the listener than the screen channels. We want the sound coming from the screen and the sound coming from the surrounds to arrive at the listener at approximately the same time, if they are the same sound. Of course, it cannot be perfect for everyone but, as is our custom, we tend to make the “Reference Listening Position” the point where the timing should match. The RLP is 2/3rds back from the screen and centered. So, if the theatre is 60-feet long and 40-feet wide, the path difference is ((2/3)x60)-(40/2) = 20-feet. If we use 1130ft/second as the speed of sound, we then divide the 20-feet by the speed of sound and end up with about 17.7ms. This applies to discrete multi-channel audio.

                  Pro Logic Surround Delay

                  For Pro Logic audio, as used in the Active Matrix Surround Decoder, things get a little more complicated. Matrix decoders, like Pro Logic, have to also contend with “crosstalk” that will cause audio bleed into the surround channel from the other channels.

                  To mask this problem, we need some more delay to take advantage of the “Haas Effect.” This is also known as the “precedence effect.” The short form of its definition is that if we have the dominant signal (coming from the screen channels) arrive sooner/louder, we’ll just hear that sound as coming from the dominant signal. The rule of thumb is that we need at least 20ms of extra delay to mask the crosstalk in the surrounds. However, in cinema, since we don’t want people to hear the surrounds as reverb due to crosstalk, and only one person can be in the RLP, we tend to extend this some for someone that is sitting off center a bit and, therefore, closer to the surrounds. If you were to use a Dolby CP650 or CP750 sound processor, and input the 60-foot by 40-foot room, you’d find that the “Optical” surround delay would be 49.9ms. So, a bit more than a straight up 20ms addition.
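
                  For reference, here is that room geometry as a little calculator (Python; the flat 20ms Haas margin is a rough stand-in, since processors like the CP650 clearly apply their own curve to arrive at 49.9ms rather than 37.7ms for this room):

                      def surround_delay_ms(length_ft, width_ft, matrix=False, haas_margin_ms=20.0):
                          speed_ft_s = 1130.0                  # speed of sound, as used above
                          path_diff_ft = (2.0 / 3.0) * length_ft - width_ft / 2.0
                          delay_ms = 1000.0 * path_diff_ft / speed_ft_s
                          return delay_ms + haas_margin_ms if matrix else delay_ms

                      print(round(surround_delay_ms(60, 40), 1))               # ~17.7ms, discrete
                      print(round(surround_delay_ms(60, 40, matrix=True), 1))  # ~37.7ms minimum, Pro Logic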

                  I have created a Q-SYS User Component that will calculate the desired delay(s) based on the length and width of the theatre. We will probably revisit that in a later blog when we discuss control systems. It does not use any scripting to calculate its values, including imperial/metric conversions.

                  Picture12.jpg

                  So, how are we going to get the surround delays into the design and at the correct places? It’s going to be tricky due to the DCP and HDMI audio sharing channels 11 and 12. We’ll want a delay when in DCP mode but no (surround) delay on those channels for HDMI since they are the center and subwoofer channels for HDMI. If we kept my modification above for decoupling the HDMI from the DCP inputs on the router, it would be easy, but then we would have a design that wouldn’t work for those that don’t modify it.

                  It will probably be easiest to place the delay just prior to the Master Fader and have it change its value based on format. Again, if we were starting from scratch or are free to abandon the existing snapshots, I think it would be best handled on the left side of the Routing component.

                  Picture13.jpg

                  This is, pretty much, a repeat of our lip-sync delay but just for the surrounds. Since there is only one instance with the Pro Logic component, the preshow, I was able to use a Signal Name connected to the Snapshot’s “Match 4” pin (4 corresponds to the Preshow format in this design) to control the Control Router. I used a “Logic Not” Control Function to switch the delay back to “Normal” for all other sources.

                  I also used Signal Names to break the signal out to the delay and added four new signal names to feed the Master Fader.

                  Picture14.jpg

                  Router Feedback

                  What is on the left-side of the router that we might get better use out of if it were on the right-side? Well, if we know that things on the right-side of the router will work with everything, what would happen if we moved the Active Matrix Surround Decoder over to the right-side? That would mean that it could be used for DCP 2.0 format titles as well as for things coming in from the mini-phone jack. It also means our Surround Delay could get a little easier since we could put the Pro Logic delay on the one decoder. Alternatively, we could just plop down enough Surround Decoders to cover all of our inputs, but how often do you need to decode two at once?

                  Moving the matrix might look something like this:

                  Picture15.jpg

                  I added two pins on the input side to cover the “PreShow” feed and two pins on the output side to feed the Active Matrix Surround Decoder (henceforth referred to as “Pro Logic”). I changed the Signal Names to reflect their new purpose so they all start with “PL” for Pro Logic. I also changed the color of the Signal Names so they would be easier to trace the signal flow with your eyes.

                  The Router’s matrix changes just slightly.

                  Picture16.jpg

                  We now have inputs 25 and 26 going out of 9 and 10, respectively. That sends the PreShow to the Pro Logic decoder and then, via Signal Names, the output of the Pro Logic decoder comes back into the Router on 18-21, as before, and then the decoded audio goes out to feed the proper channels.

                  If we wanted to add a “FEATURE 2.0” format, what would we need to change for that routing?

                  Picture17.jpg

                  We’d need to use inputs 1 and 2 to feed the Decoder and everything else is the same as was used for the “Preshow.” There is zero impact on your DSP resources and you just added a Pro Logic decoder to any input that might need it. It also didn’t add all that much to the complexity of the schematic page.

                  Hopefully, these examples demonstrate the differences between what you place on the left-side (input) and the right-side (output) of a router or any matrix. Additionally, the use of feedback allows you to multiply the benefits of these placements. Feedback like this could apply to audio, video, or control.

                  Note, since this modification will impact the control system of this design, I won’t keep it in there for future blogs so those of you that are just using the standard template can continue to do so and everything should work identically.

                  Conclusions

                  This blog was about one piece of the puzzle (the format selector for the cinema processor). However, it is a critical piece. And how you set yourself up with it will impact how your design functions. It also could determine how well it scales (gets added to) and even how much work you have to do, going forward.

                  At the end of the day, every cinema processor you’ve ever worked with has had some form of router (even if it is just a multi-gang switch or a collection of relays) at its heart. The router chooses what input(s) to look at. It decides what magical stuff to send that/those signals through so that they may come out the other end.

                  It isn’t all that complicated. You also do not have to limit yourself to just the one router. You can stack routers/mixers on the inputs and outputs, if that suits your design. The same left-side/right-side rules apply. You want to strike a balance of a flexible design while being thrifty on complication and resources. Experimentation can be essential to figuring out what will work for your specific needs.

                  We also have to consider our delays to ensure good lip-sync with the picture and good surround matching for the various formats.

                  ©2025 by Steve Guttag
                  [Blog 8, Page 3 of 3, End of Blog 8]


                  • Q-SYS For Cinema Blog-9

                    QDS–Part-5, Sample 7.1 Part-4, Audio Flow Part-3, Processing Part-2


                    3/1/25


                    Current QDS Versions: 9.13.0 and 9.4.8 LTS.
                    Sample 7.1 design version: 4.2.0.0
                    Introduction

                    In the previous Sample 7.1 Blog, I went over the Routing in the Sample 7.1 Design.
                    In this Blog, I will continue through the audio path of the Sample 7.1 design and move past the Routing component. If you don’t have the Sample 7.1 Design on your computer, you should check out my previous blogs.
                    This is really a continuation of Blog 8 and could easily be Blog-8 part-2.
                    And now for the obligatory disclaimers/disclosures:

                    Disclaimer

                    If any of the content in this blog happens to show up in a Q-SYS exam, it is not my intention to provide an answer-sheet beyond the discussion of good practice. I have not seen any form of the cinema final exam (my Level-1 was before there was a cinema version).

                    Disclosure

                    I do not, in any way, work for QSC/Q-SYS. These thoughts are my own based on my own interactions with the product(s) and implementing Q-SYS within actual cinema environments. I do work for a dealer that has sold QSC products since the 1980s, including Q-SYS and its predecessors. For the purposes of this blog, I represent only myself and not my employer(s) or any other company.
                    Q-SYS Level-1

                    There is a presumption that you have either:
                    • Gone through the Q-SYS Level-1 online course.
                    • Are currently taking the Q-SYS Level-1 online course.
                    It would be impractical for me, in a blog form, to cover all of the minutiae (with respect to working within QDS) that are already covered, very well, in the Level-1 videos.
                    Link for Q-SYS Level-1 Cinema training

                    Processing Continued

                    Okay, I’m going to pick up where I left off in Blog-8 and continue our journey from the Router component through “Processing” so we get the signal ready for the tuning and the amplifiers. I’m going to use the modified version of the Sample 7.1 design (for the most part) that we’ve been working on in the earlier blogs but everything will work with the standard 7.1 template design too.
                    The big change from the unmodified design is that I moved the delay components to the left side (input) of the Routing component so that we could apply delays based on each input rather than have a global delay that would need to be modified for each input.
                    I also added in surround delays for both discrete and pro logic based audio.

                    Image1.jpg

                    So, what is next? The Master Fader. The Master Fader is a “gain” component and is nothing special. They’ve exposed the Gain and Mute pins and we have Signal Names on them. This should be expected since multiple things will want to change the fader or mute condition. Can you think of any? How about the DCIO? How about the DCP server? There could be UCI controls for those two as well. There are going to be Signal Names on the inputs and outputs since everything that controls the fader will want to be updated about any changes done by “other” devices. I will go over this more when we come through the design and look at its control provisions.
                    Interestingly (to me, at least) the ADA channels do not interact with the main fader. While you probably do not want the VI-N channel to respond to the fader, you probably would want the Hearing Assistance channel to respond to the fader. If the movie is too loud/quiet, you probably want that reflected in the Hearing Assistance. But that is a matter of opinion, I suppose.
                    We’ll also want to keep the lack of ADA signals going through the Master Fader in mind since it means that they will not respond to a “Mute,” unless they account for it elsewhere (we’ll find out). If you mute the system for any reason, you don’t want audio to continue going to people’s headsets, do you?
                    Before moving further along, let’s look down at the bottom portion of the Processing group. All of the outputs of the Routing component (and some from the Inputs) are down there, so they are next in the audio path.

                    Assistive Audio

                    Image2.jpg

                    The LCR mixer generates center-weighted hearing assistance from any source that goes through the Router. Let’s open it up:

                    Image3.jpg

                    So, it reduces Left and Right by 3dB, which allows dialog to be more dominant in the mix. Let’s keep this in mind when we talk about the Microphone, in a bit.
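
                    As arithmetic (my sketch of what that mixer is doing), the HI feed is simply:

                        import numpy as np

                        def hi_mix(left, center, right):
                            g = 10 ** (-3.0 / 20.0)       # -3dB as a linear gain (~0.708)
                            return g * left + center + g * right

                        l, c, r = (np.random.randn(480) for _ in range(3))
                        print(hi_mix(l, c, r).shape)      # one mono HI feed, dialog dominant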

                    The “HI ROUTER” lets us choose to use either a pre-recorded HI track on the DCP or to use our own mixer. If you choose to use the DCP’s track, know that your HI feed will not work with any other format (non-DCP), and only for those DCPs that actually have a pre-recorded HI track (most, if not all, Hollywood movies now, but not guaranteed for other parts of the world or independent productions). Using an LCR mix will work for most inputs, except, possibly, the microphone.

                    Moving down we get to the output gains and delays for HI and VI. It is much handier to set your levels here than to have to go to the physical transmitter/emitter (that may be in the auditorium) and change it there.

                    I would think that the delays they are showing would be rarely used. If the lip-sync is correct for the seating area, you likely don’t need more, unless your room is very big. Generally, lip-sync should be set for the first row in the theatre. People are adept at handling when sound lags what they see (slightly). If you are talking with someone that is 10-20 feet away, they don’t appear out of sync despite it taking up to 20ms for the sound to reach your ears. However, if sound precedes what one sees, that is very apparent as it does not occur naturally. Since the person wearing headphones hears the audio with essentially zero delay, if you have a large theatre (over 100-feet in length, say) where you have shortened the lip-sync a little to compensate, the lack of delay in one’s headphones could become apparent.

                    There is no lip-sync issue with the Descriptive Audio (VI-N), so I would think the delay they are providing is superfluous.

                    Meters
                    Image4.jpg

                    The meters section is fed from the output of the Routing Component and, in the case of the AES7-A1 and AES8-A2, the Inputs. I’m sure we’ll see these meters show up in the UCI when we get to that in a later blog.

The “Amplifier Outputs” is a container that holds power meters from the amplifiers.

                    Image5.jpg

A problem with locating these meters here, in the Processing group, is that if you were to change your amplifiers (a change of make/model, not a swap for a repair), some of these meters could go missing. Amplifiers are Left-Side Pane (LSP) components, so they do not copy/paste between designs. You will always have to rebuild these meters as you duplicate/change the design. I would tend to locate them, if they show in the schematic at all, near the amplifier group, since that is where you would also be changing the amplifiers and need to update the links to the meters.

                    Normally, I only place the power meters in a UCI for troubleshooting while you are listening to the amplifier output(s) on the booth monitor.

Booth Monitor
                      Image6.jpg

This is a 10x1 router. We’ll have to wait and see how this implementation works. However, at first glance, there doesn’t appear to be a provision for mixing multiple channels (e.g. Left, Center, Right) into the monitor. It is just a single channel.

                      The “Loudspeaker Listen Buttons” component is a container that houses the “Listen” buttons from the amplifiers (so, those will not survive a copy/paste to a new design for your next auditorium). While they have not included the discrete HF/LF channels, there is nothing to stop you from adding those buttons, if you so desire.

Whichever “Listen” button you select determines which amplifier output you hear; it will show up on input 10 of the Booth Monitor Router.

Microphone Input(s)

                      Moving back to the main part of the output of the Routing component… If you are on the stock Sample 7.1 Design, the next component is the “Master Delay.” If you modified your design to match what I have done, then you are heading to the “Bypass Mixer” and “Surround Offset Gain.”

                      Image7.jpg Image8.jpg

                      I want to interrupt the signal path a bit here (and after the delay(s)).

Traditionally, in cinemas, processors that had a microphone input created a separate format for it. In the film era, this worked, as there would rarely be a time when you would need both the film sound and the microphone. However, in 2025 (and for quite some time now), the odds that you will need a live microphone while other content is running, say a laptop feeding a PowerPoint or Zoom presentation, are quite high.

                      Additionally, the volume needs of the microphone(s) are bound to be different than that of other content. As such, we’ll want to have separate volume controls too.
                      So, what can we do to get the microphone into the signal path such that we can activate it, even while content is running?

                      We’re going to want to inject the microphone into the signal path after any/all delays and not be subject to any movie formatting. That would place it here after the Master Fader:

                      Image9.jpg

                      Note, if you have the unmodified Sample 7.1 design, the location is the same except it is after the Master Delay.

                      Image10.jpg

                      Let’s look inside:

                      Image11.jpg

The mixer is, pretty much, a pass-through. I’ve only added the microphones to the side-surrounds; as such, I could have used a smaller mixer and just placed it on the surrounds. However, there are those who want the microphone to come from the screen channels (a higher chance of feedback, since your microphones will be pointing at the speakers). I only use the side surrounds because the rear surrounds face the presenter and can create a disorienting echo, depending on how loud they are, as the presenter will hear themselves delayed by the length of the theatre.


I kept the microphones behind the BYPASS MIXER because, if you are using the screen channels, you will want the bypass to work for the microphones too.


                      I’ve pulled the fader and mute button out onto the schematic for ease of use. These same controls can be placed into the UCI as well.


We could have used a dedicated Gain component just for the microphone. That could simplify some things, such as making it less complex to send the microphone signal to the ADA outputs. Speaking of which, we will want to add the microphones to the Hearing Assistance path. And, while we’re at it, add them to the booth monitor path and meters too.

                      Image12.jpg

Another thing you could do is provide a little mini-mixer where you can decide to use either the Center (and include, or not, Left and Right; it’s your choice, as always) or the Surrounds, or a mix of both. With a couple of mouse clicks, you have an instant mixer:

                      Image13.jpg

                      Since you will want both side surround channels to stay at the same relative levels, I only brought one level out of the matrix mixer and then exposed the control pins to allow me to link them together. I have left the mute pin for the microphone input available in case we want to tie that into an “All-Mute” later.

In case you haven’t thought about it yet, take a moment to realize what we were able to do with just a couple of mouse clicks. We redid part of the audio signal path to suit our needs. Since your needs may differ, you can modify other parts of the signal path as your needs dictate. Purpose-built (“canned”) processors can’t accomplish this. They cannot change as one’s needs change. They are stuck with the hardware design and the parameters they were designed to. And, even if the manufacturer were to agree with your design change request, how long do you think it would take before the change showed up in the product’s software? With Q-SYS, you can make that change in minutes. Q-SYS can be incredibly flexible for cinema.

                      Muting ADA Audio and ADA Audio Level

                      You likely do not want your ADA audio (either one) playing when you have muted the sound, including in a fire alarm situation.

                      You also will, likely, want the Hearing Assistance audio to track, somewhat, with the main fader so the level is increased/decreased with louder/quieter content. Descriptive audio does not need/want such level adjustments. Furthermore, you probably do NOT want the microphone(s) to either mute or track with the main fader as their level is set separately and may be used in emergencies.

                      Making these modifications should be easy enough.

                      Image14.jpg

I exposed the gain and mute pins on just input 1 of the HI Mixer we added. This will prevent either gain or mute from affecting a microphone feed; the microphone level and mute should be kept independent.

For VI, I exposed the mute pin of the VI GAIN and let the same MASTER MUTE signal control it as well as the HI feed.

I also adjusted the group box and some spacing so it all fits (it is very common to have to graphically touch up designs as they evolve).

Surround Offset

                        Surround Level Background

Why do we need a surround offset? Well, because we have a long legacy with surrounds to contend with, going all the way back to the 1950s. Up until the late 1970s, setting aside fringe formats, we only had mono surrounds. And, naturally, when surround levels were developed, they were matched to the screen channels. So, if you play a reference signal through, say, the Center speaker and it shows 85dBc on your meter, playing the same signal through the surround speakers should also show 85dBc.

                        When what we now call “5.1” was in its beginning stages, it needed to be compatible with the decades of history of monaural surrounds that came before it. Stereo Surrounds were not widely adopted until digital audio hit in the early 1990s. Only select 70mm releases from the late 1970s (Superman was the test feature and Apocalypse Now was the first release feature) until the early 1990s received a “split-surround” mix. ALL 70mm releases still had to have a full mono surround mix for compatibility.

The solution was for stereo-surround systems to set each half of the surround system to 82dBc using reference pink noise, such that they would acoustically sum to 85dBc when both surround channels were playing. Thus, regardless of whether you play a mono-surround version or a stereo-surround version, the level in the theatre would be the same.

When digital audio came to cinemas in the early 1990s, all three formats (Dolby Digital, DTS and SDDS from Sony) followed suit: Left Surround and Right Surround were tuned to 82dBc so that the entire surround array would acoustically sum to 85dBc.

                        As we probably should have done in 1999, I’m going to skip over Surround-EX (3-channels of surround) but suffice to say, it too would need to follow the same levels as what came before it.

                        In 2010 (Toy Story 3), we got Surround 7.1 in digital cinema. For reasons that still have me scratching my head, the answer was to set all four surround channels to 82dBc. I guess people were used to setting stereo-surrounds to 82dBc so why confuse them?

What all of this means is that it is up to the sound processor to adjust the surround levels based on the channel configuration. If you are playing a 7.1 feature and you tune to 82dBc at reference, you are good to go. If you are now playing a 5.1 feature, you have Lss combining with Lrs, and likewise Rss combining with Rrs. Uh-oh. That would have the Left side playing around 85dBc and the Right side also playing at about 85dBc, for a total of 88dBc when the Left and Right acoustically sum in the theatre. You’re 3dB too hot. And that is the reason for the Surround Offset.
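If you want to verify the acoustic summing for yourself, uncorrelated sources add on a power basis (10·log10). A minimal sketch in plain Python (nothing Q-SYS specific) that reproduces the 85dBc and 88dBc figures above:

```python
import math

def acoustic_sum_db(levels_db):
    """Sum of uncorrelated acoustic sources (power basis), in dB."""
    return 10.0 * math.log10(sum(10.0 ** (db / 10.0) for db in levels_db))

# Two stereo-surround halves tuned to 82 dBc each:
print(f"Ls + Rs:            {acoustic_sum_db([82, 82]):.1f} dBc")         # ~85.0

# 5.1 content on a 7.1 room with no offset (Lss+Lrs, and Rss+Rrs):
print(f"One side (Lss+Lrs): {acoustic_sum_db([82, 82]):.1f} dBc")         # ~85.0
print(f"All four surrounds: {acoustic_sum_db([82, 82, 82, 82]):.1f} dBc") # ~88.0
```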

                        Consumer Surrounds
The consumer world did not have the same legacy as commercial cinemas and has taken a different approach. Every channel is set the same (with the possible exception of the subwoofer/LFE channel). So, if your reference signal reads 85dBc on Center, it should on each of the surround channels too. But there is a catch. Consumer formats don’t reconfigure the surrounds between 5.1 and 7.1. If you have a 7.1 home system and are playing 5.1 audio, the back surrounds do not play (unless you have an upmixer to “fake” it). So, if in a commercial cinema you take the traditional cinema approach of mixing the Left Side into the Left Rear, and likewise the Right Side into the Right Rear, you will have a 3dB offset to contend with.

You also have to allow for the fact that you’ve tuned your room with surrounds at just 82dBc/channel and should boost them by 3dB while running consumer video to match their intended levels. So, it is entirely possible that you will find yourself boosting the HDMI surround levels by 3dB only to have to reduce them back down by 3dB when playing 5.1 content! It all depends on how involved you want to get in playing consumer-based content.
So, if you’ve read all of that, you’ll see we need some means of adjusting the surround levels based on the channel configuration of the content (and possibly the source of the content). The Sample 7.1 Design uses a 4-channel Gain component set to -3dB and includes it in the Format Snapshot bank. The snapshots either set it to 0dB (do not apply the offset) or apply the -3dB offset on a format-by-format basis.

                        Bypass Mixer

                        The bypass mixer is a handy inclusion in all cinema processor designs. Its purpose is to get one out of trouble quickly. If you have an amplifier or speaker failure on your screen channels, you can quickly route around the problem. Let’s look inside:

                        Image15.jpg

                        My guess is that the Bypass mixer is part of the Level-1 testing since its implementation feels incomplete. For example, the input labels are still at their defaults. There are other indications but, just in case, I’ll leave those for you to discover (and overcome).

                        In its normal mode, the signals are all able to pass through without any changes. In one of its bypass modes, things change a bit:

                        Image16.jpg

This is clearly a center bypass. Center (aka “3”) is being routed to Left and Right, but at a -3dB level, and the Center output is being muted (this ensures that if the speaker is damaged and is now making rattles/buzzes, it will keep quiet).

                        At first glance, this might seem like a decent mix but it isn’t the approach I would take. First off, you are outputting more than 100% of your normal output, per channel.

I don’t want this to be too much like a math class, so I’ll keep the numbers simple. Let’s say you’ve determined that you need 200 watts to drive your Left speaker and have chosen the amplifier and speaker based on that. So, if you play something at 0dB (theoretical maximum), you are potentially consuming all 200 watts. Then, you add in a Center channel at -3dB. That means you are asking the speaker and amplifier to play at 583 watts. (When you mix voltages, summing two identical values raises the level by 6dB; here, however, one value is 3dB lower, or 0.707 of Left (or Right), so the sum is 1.707 times the original voltage. Do the math and 20·log10(1.707) comes to 4.64dB, and power goes up by the square of the voltage: 1.707² × 200 watts ≈ 583 watts.) Is your amplifier and speaker sufficiently specified to withstand that possibility?
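If you’d rather sanity-check that arithmetic outside of a design, here is the same voltage-domain math as a minimal Python sketch (the 200-watt figure is just the example number from above):

```python
import math

def db_to_voltage(db: float) -> float:
    """Convert a dB level to a voltage ratio."""
    return 10.0 ** (db / 20.0)

# Left program at 0 dB plus Center mixed in at -3 dB.
# Electrical mixing is coherent, so the voltages add directly.
v_sum = db_to_voltage(0.0) + db_to_voltage(-3.0)   # 1 + 0.707 = 1.707
level_db = 20.0 * math.log10(v_sum)                # ~4.64 dB
watts = 200.0 * v_sum ** 2                         # power scales with V^2
print(f"Combined level: {level_db:.2f} dB")        # 4.65 dB
print(f"Demand on a 200 W channel: {watts:.0f} W") # ~583 W
```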

                        The same issue applies if you are merely outputting an analog or AES67 or Dante signal. If you’ve set your gain structure such that the audio leaves Q-SYS to fit within the various output capabilities of the components, did you leave yourself enough headroom to not clip the analog (or digital) outputs?

The math isn’t enjoyable for most, as dB notation means you are working with logarithms and constantly dealing with negative numbers. And, to add to the confusion, voltages summing (like in a mixer, where the signals are coherent) rise at a steeper rate (20·log10) than powers summing (speakers adding acoustically, 10·log10).

                        If you have an actual Core (audio cannot work in emulation), you can let Q-SYS do the “math” for you. Set up a Sine wave generator feeding your mixer (at 0dB) and have a meter on the mixer’s output.

                        Image17.jpg

                        We’re at 4.65dB (close enough to our predicted level for you to believe Q-SYS’ meters, or my math)! If you are running out of headroom on your outputs, you can always lower the outputs of the Bypass Mixer by 4.65dB when recalling this preset:

                        Image18.jpg


You are not going to clip the signal within Q-SYS. It isn’t until you need to get the signal outside of the Q-SYS ecosystem that clipping becomes a concern. We probably don’t want to take that big of a volume hit (~5dB) when switching into “Bypass,” but you do need to figure out how the channels are going to sum.

Key Tip: It is important to realize that the Center channel is more important than any of the others. It contains the majority (and in many cases, ALL) of the dialog. You should let it get the bulk of the available power and minimize the effects of trying to “phantom” the Center speaker using Left and Right. I would start the Center channel at -3dB in both Left and Right, then reduce Left and Right until the resulting level fits within our gain structure; at no point would I let Left or Right be set higher than ‑6dB. The dialog shouldn’t “compete” with music and effects on the same speaker.

                          Image19.jpg

If you can spare the 1.65dB in your system, this will result in the dialog level being approximately the same as when you are not in bypass, making the most of a bad situation at the mere press of a “button” on the UCI. If you need to reduce the level to avoid clipping anything, take it off of the Left and Right outputs. The reality is, you probably are not running a movie where all three screen speakers are being asked to play at 0dBFS…which would translate into nearly 110dB in the theatre from just those three speakers (presuming you are set to 7.0, or 0dB, on your fader)!

The other distinct possibility is that you lose Left or Right and need to feed them into Center. Again, you will want Center to remain the dominant sound coming from that speaker.

                          I would lower Center by 3dB and set Left and Right at -12dB. More often than not, Left and Right carry music and effects and will be playing at approximately the same level as each other. By setting them to ‑12dB, they will sum to -6dB, which is 3dB below Center (we set it to -3dB).

                          Image20.jpg

You likely have 1.66dB or so of headroom to spare; if not, simply lower the output of the mixer by 1.66dB.
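The same coherent-sum formula verifies both bypass mixes, the 1.65dB from the center bypass and the 1.66dB here, assuming, as a worst case, that the program in the summed channels is fully correlated:

```python
import math

def mixer_sum_db(taps_db):
    """Coherent (voltage) sum of mixer taps, in dB re: one 0 dB input."""
    return 20.0 * math.log10(sum(10.0 ** (db / 20.0) for db in taps_db))

# Center bypass: Center at -3 dB plus the channel's own program at -6 dB.
print(f"Center bypass:   {mixer_sum_db([-3, -6]):+.2f} dB")        # ~ +1.65
# Left/Right bypass into Center: Center -3 dB, L and R both at -12 dB.
print(f"L/R into Center: {mixer_sum_db([-3, -12, -12]):+.2f} dB")  # ~ +1.66
```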

The downside to this strategy shows up in the few features where dialog is “panned” across the screen: a character at the left or right edge of the screen is going to be 9dB down relative to center, and that is very significant. It isn’t as big a deal for effects. If a car drives across the screen, it will sound completely natural for it to be louder in the center of the screen when the only sound source is coming from the center of the screen.

There isn’t any all-encompassing “right way.” There are just better and worse ways, depending on the content and the system’s capabilities. If your system has the headroom, by all means set Center to 0dB and Left and Right to -3dB.

                          Remember, this is a temporary/emergency condition that should be fixed and not something you should need to live with.

Key Concept: Your Bypass system could be for nothing if you put the Center channel (any part of it) on the same amplifier(s) as Left and/or Right. Always keep Center off of the Left and Right amplifier(s), since those channels are expected to back each other up.

                          Conclusions

As I stated at the onset, the “Processing” section is where all of the complexity is, audio-wise. It is where your design parameters should start, to ensure that your design can accommodate the system’s needs. I apologize for how long this (and the previous) blog is, but I didn’t want to break it up too much, as there is a flow to the signal and to the logical groupings.

                          You should notice that modifications, to suit your needs, are relatively easy to accomplish. All of the modifications that I have done, thus far, will not impact any part of the control system, as supplied with the Sample 7.1 Design. So, if you are just working with the stock Sample 7.1 Design, you are still good-to-go.

                          I think the change in how the microphone(s) is handled is an important one for cinema and one that may not have been readily apparent to the newcomer since off-the-shelf cinema processors still make the microphone a separate format (note, I left that in place as well).
The next sections should go more easily, though I might be able to throw out some curveballs there too.

                          ©2025 by Steve Guttag
