Q-SYS Corner


  • #31
    Thanks Sean. I just took a quick peek at the Luminex switches, the GigaCore 12 and GigaCore 26i. Holy crap! No exaggeration, they are ten times pricier than the likes of TP-Link and more than twice as pricey as the QSC/Dell switches. The Luminex literature mentions IGMP snooping but nothing about querying. And if they do query, do they arbitrate among themselves to elect one querier? I would hope so, at that price. Do they have a "Q-SYS" mode? If so, how are they ensuring the right QoS settings for the various Q-SYS configurations (with/without Dante)?

    As for configuration, once the switch is "figured out" and presuming it has the proper bandwidth and works properly, configuration is typically nothing more than loading your saved configuration and updating the IPs as necessary for the particular site (a rough sketch of that workflow is at the end of this post). Now, I get that it is likely easier to set up VLANs (properly), but consider how many switches I would have in, say, a 10-plex: the entire sound system would be scrapped before switches in this price range would be accepted. The switches far exceed the cost of the sound processing itself. In a sense, the QSC/Dell switches have the benefit over them in that they are, at least, half as much money and are also preconfigured (with just the IPs needing to be changed).

    It would be nice if QSC or the cinema industry could come up with some more value-priced switches that suit the needs of typical cinemas, even if they lack some of the flexibility that higher-end switches bring but cinemas just don't need.
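
    Just to illustrate what I mean by "loading your saved configuration and updating the IPs," here is a minimal sketch of that workflow in Python. The placeholder names, addresses, and the config syntax in the template are all made up for illustration; a real switch (TP-Link, Dell, Cisco, whatever) has its own config format and its own import mechanism.

    Code:
    # Minimal sketch of the "reuse a known-good config, just swap the IPs" workflow.
    # Placeholder names, addresses, and the template syntax are hypothetical, not
    # any particular vendor's format.

    GOLDEN_TEMPLATE = """\
    hostname {HOSTNAME}
    interface vlan 1
      ip address {MGMT_IP} 255.255.255.0
      default-gateway {MGMT_GATEWAY}
    """

    def render_site_config(template: str, overrides: dict) -> str:
        """Substitute site-specific values into the saved 'golden' configuration."""
        config = template
        for placeholder, value in overrides.items():
            config = config.replace("{" + placeholder + "}", value)
        return config

    site_overrides = {
        "HOSTNAME": "aud5-qlan-sw1",     # made-up naming scheme
        "MGMT_IP": "192.168.10.2",
        "MGMT_GATEWAY": "192.168.10.1",
    }

    print(render_site_config(GOLDEN_TEMPLATE, site_overrides))
    # The rendered text is then loaded through the switch's own web UI or CLI
    # import, which is entirely vendor-specific.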



    • #32
      I cringe when people say "cheap" when I believe they really mean "inexpensive". In my mind there is a huge difference. But, Steve, I like your term "value priced". The adage "You get what you pay for" always comes to mind, although it isn't a strict rule: you can find great values if you are careful, but you can also get stung by a high-priced but not-so-perfect item. I struggle with product pricing. I think I've been pretty transparent about that in the past.

      I once sat with the CEO of a (now defunct) company that produced a low-cost item, one of the first hand-held thesauruses. Okay, so this was in the days preceding the Internet. I believe it sold for under $20. At that time my company produced a product in the $3000 range. He sold hundreds of thousands of his little device and in the end wasn't making any money. I sold hundreds of our workstations and in the end wasn't making any money. We both lamented the lack of that sweet spot: a mid-priced, high-volume product with a comfortable margin and enough profitability to actually grow a company.

      I still haven't found that item. It sure doesn't help when the whole world goes in the toilet with a global pandemic and our government can't help but destabilize everything and make things worse.

      We treasure our Netgear "hubs" for their debugging value but have also been avoiding their switches for routine use. I'm not sure whether the ones we've used have power supply issues or what, but they seemed to run hot, which I think led to lock-ups or, more likely, reboots and data loss. But like Sean, we just moved on to another brand. That said, we don't supply switches and don't have to price them into systems; we're just end users there. Still beats 10BASE2 coax.



      • #33
        Bruce, no question, one often gets what they pay for. Note, Q-SYS is billed as working on standard IT infrastructure. What one gets with the pricier switches is often a degree of granularity of control as well as specific features. Once you have a switch that has the features you need and handles the traffic you throw at it, any excess features/cost, just because they are in the box, do not benefit anyone. With Q-SYS, there are specific QoS requirements and IGMP desirables, plus bandwidth requirements. About all I don't see in the current TP-Link Jetstream series is IGMP querier arbitration: when two or more queriers are present, one should be elected the querier, and when it drops out, the next one in line takes over (a toy sketch of that election is at the end of this post).

        Thus far, I've yet to have an audio issue running on these TP-Link switches.

        Now, also to your point, there has to be enough quality in the product that it actually works and can actually do what it claims to do. A product, regardless of type, that doesn't work is worthless, and no amount of "savings" makes it of value. I could hand you a crumpled-up piece of paper out of the trash and tell you it is a network switch, and it would be no better or worse than a box with a power cord and RJ45 ports that doesn't function. <locating wood to knock upon> I've yet to have one of these TP-Link switches crap out on me either. Oddly enough, for my standard unmanaged switches, I've historically used HP ProCurves (1410G and now 1420G), and I still do for non-Q-SYS use. Why? They have NEVER crapped out or caused an issue. I'm moving away from Tripp-Lite UPSes... why? They are giving me grief. If their batteries fail, they don't just go into a battery-fail mode; they turn off and don't light ANYTHING, not even the "Change BATT" LED. It appears that a fuse is blown or something; put new batteries in, and it wakes up. However, that also means that the UPS becomes the source of equipment failure rather than just preventing it. I have been putting in automatic transfer switches for some time now (not Tripp-Lite, by the way) to ensure a UPS isn't the cause of going down.

        I'll pay more to avoid flaky equipment. And, when it comes to something like a Q-SYS switch, I'd move to a different, more expensive unit in a heartbeat if the lower-cost switch had ANY hiccups. I think it is just a matter of getting "enough" switch to do the job.
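
        For anyone not familiar with the querier arbitration I mentioned above: standard IGMPv2 querier election just means that every switch willing to query starts out querying, the one with the lowest IP address wins, and the rest stand by until the elected querier goes silent. Here's a toy sketch of that rule; the switch names and addresses are made up.

        Code:
        # Toy sketch of IGMPv2-style querier election (lowest IP address wins).
        # Switch names and addresses are invented for illustration only.
        from ipaddress import IPv4Address

        def elect_querier(candidates: dict) -> str:
            """Return the name of the switch that should act as the active querier."""
            return min(candidates, key=lambda name: IPv4Address(candidates[name]))

        switches = {
            "rack_switch_a": "192.168.1.2",
            "rack_switch_b": "192.168.1.3",
            "booth_switch":  "192.168.1.10",
        }

        print(elect_querier(switches))   # rack_switch_a wins (lowest IP)
        del switches["rack_switch_a"]    # the elected querier drops out...
        print(elect_querier(switches))   # ...and the next lowest takes over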



        • #34
          On one major project for a 12-screen venue that included 36 Visionary Solutions E4100/D4100 HDMI over IP endpoints, we originally used Dell N1124P-ON switches as supplied and configured by QSC for the audio Q-LANs, and TP-Link T1600-28PS switches for the video (these were not my choices - I installed the system, but did not spec it).

           We found both problematic. The Dell switches had a reliability problem: one died completely within a month of installation, and another had one port go completely dead and lost PoE on all the others. Also, fiber transceivers for them are staggeringly expensive ($400). The TP-Link switches couldn't handle the video bandwidth, because they only support 1 Gbps fiber, not 10. After repeated complaints of video glitching, we decided that we had to upgrade the LAN infrastructure.

          We eventually upgraded the video LAN to Cisco SG-350X-24MP switches in the MDFs (per Visionary Solutions' recommendation), and Netgear GC728XPs in the IDFs. These Netgears weren't an option for the MDFs, as these switches had to have four SFP+ ports, and the Netgears only have two.

           The Ciscos were seriously expensive, but the Netgears are very cheap for what they are ($400-ish). Furthermore, they are happy with FS generic Cisco-compatible 10 Gbps transceivers, at $25 a pop. Both are absolutely not plug and play: configuring the IGMP per Visionary Solutions' specs was fiddly (a rough sketch of the kind of settings involved is below), but once everything was set up, it worked without any complaints for months (until the coronavirus shutdown).
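
           To give a flavour of what "fiddly" means, these are the kinds of per-VLAN multicast settings that typically have to agree on every switch carrying an AV-over-IP VLAN. The values below are generic placeholders for illustration, not Visionary Solutions' actual recommendations, and real switches expose them under different names.

           Code:
           # Rough sketch of the per-VLAN multicast settings that typically have to
           # agree on every switch carrying an AV-over-IP VLAN. Values are generic
           # placeholders, not Visionary Solutions' actual recommendations.

           REQUIRED = {
               "igmp_snooping": True,            # learn which ports want each stream
               "querier_enabled": True,          # someone has to send the queries
               "unknown_multicast_flood": False, # drop, don't flood, unregistered multicast
               "fast_leave": True,               # prune a port as soon as an endpoint leaves
           }

           def check_switch(settings: dict) -> list:
               """Return the settings that differ from the required baseline."""
               return [k for k, v in REQUIRED.items() if settings.get(k) != v]

           mdf = {"igmp_snooping": True, "querier_enabled": True,
                  "unknown_multicast_flood": False, "fast_leave": True}
           idf = {"igmp_snooping": True, "querier_enabled": False,   # forgot the querier
                  "unknown_multicast_flood": True, "fast_leave": True}

           for name, cfg in {"MDF Cisco": mdf, "IDF Netgear": idf}.items():
               problems = check_switch(cfg)
               print(name, "OK" if not problems else "fix: " + ", ".join(problems))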



          • #35
            Separate reply, as it's to a separate point.

            Originally posted by Steve Guttag
             I'm moving away from Tripp-Lite UPSes... why? They are giving me grief. If their batteries fail, they don't just go into a battery-fail mode; they turn off and don't light ANYTHING, not even the "Change BATT" LED. It appears that a fuse is blown or something; put new batteries in, and it wakes up. However, that also means that the UPS becomes the source of equipment failure rather than just preventing it.
             I've had this happen too, but sometimes with more subtle symptoms. At one site, we had complaints of DSS220s spontaneously rebooting in the middle of a show. After much head scratching, it turned out to be because they were powered through a Tripp-Lite UPS with batteries that were starting to fail. We bypassed the UPS and plugged the power cords straight into the wall, and the problem went away. After replacing the batteries and reconnecting the power through the Tripp-Lites, the fault did not reappear.

             I now recommend that anyone who uses one of these things replace the batteries proactively every five years, and whenever I do so, I put a label on the front that reads: "Batteries replaced on [Date]. Next replacement due on [Date plus five years]."



            • #36
               Leo, we replace batteries in UPSes every 3-5 years, with a 4-year nominal. You still get the odd bad battery, and the Tripp-Lite just drops. Note, we've used both the SU and the SMART series; I prefer the SU as it is a double-conversion design. The SMART series, since they went to the digital display, have also become unreliable, in the sense that I've now had multiple units just end up dead. If one has to buy the unit twice in 10 years, it is cheaper to go with a more expensive but more reliable UPS. Then again, in theory, my ATSes mask the problem now (unless the UPS fails and THEN they lose power).

               For a site we're doing with Q-SYS and a core driving 9 screens, there are two COREs (natch), two UPSes (double conversion), and an ATS on each, so for it to go down you need to be missing some serious power and have multiple catastrophic failures.

               On to the switches: the Cisco SG350X-24MP is about 33% less than the QSC-supplied/configured Dell-based 24-port switch, so I would not classify the SG350X as "seriously expensive." If you want to see seriously expensive, price out Sean's Luminex GigaCore 26i. Be sure to be sitting down!

               I haven't looked into it, but what is the difference between the Cisco SG350 and SG350X series? I have to believe it's bandwidth.

              Note, on the TP-Link T1600/T1500 stuff, I've only ever passed audio through. I haven't even tried QSC's Video stuff, which should take less bandwidth.

              I have not had any of the QSC/Dell switches fail but I also only have a handful in service.



              • #37
                The Cisco switches are three times the cost of the Netgears (which, so far, have worked reliably and flawlessly), hence my "seriously expensive" remark. I wasn't responsible for buying the QSC-customized Dells, so I didn't see the figures. If they're even more expensive than the Ciscos, and you have a cost of several hundred per fiber transceiver on top of that, I am even less impressed with them than I was previously!

                 The instructions for the model of Tripp-Lite we usually install recommend replacing the batteries every five years, and I haven't seen them go bad before then. The ones that were causing DSS220s powered through them to go funny were 6-7 years old at the time. I agree completely that they shouldn't stop working when the batteries die, and that this is a major design flaw.



                • #38
                   Another penalty of leaving lead-acid batteries in too long is that they swell, and it can be a trick to get them out. I've had some batteries croak at 3 years, but yes, most get to 5 years and beyond; we target anywhere between 3 and 5 years for replacement (it all depends on when we are there for a maintenance call and how old the batteries are).

                   As for the QSC/Dell switches, yes, they are pricier than the Cisco SG350 or SG350X. The non-X Ciscos are about 50% as much as the QSC/Dell; the "X" versions are about 67% as much. TP-Link and Netgear are typically in the same price range based on features; they are "value priced." D-Link is another value-priced brand, and for Q-SYS, QSC still has its setup sheet for specific D-Link model(s).

                   As a heads-up, on one of the boards I frequent, there have been some pretty scathing reports on the Netgear stuff with Q-SYS. Now, it is entirely possible that some models work and some don't, or it may be how you set them up versus how others do, or there may be some other mitigating circumstance (maybe running other stuff on the switch, and not just Q-SYS, is what triggers the problem(s)).



                  • #39
                    Originally posted by Steve Guttag View Post
                    So what switches are everyone using for Q-SYS?
                     Steve, I've got about 10 Atmos screens running with Aruba (HPE) 2530-48G switches (P/N J9775A) and have been pleased with the results thus far. One switch lockup among them in four years and no hardware failures (can't say the same for the Core 110's or DPAQ amps...). I'm partial to HPs due to past experiences and the lifetime next-day replacement guarantee.



                    • #40
                       Jason, did you find that you needed a 48-port switch? Would a 24-port have been sufficient? As for reliability, I haven't had a CORE go down yet. On your DPA-Q amps, were they 4.5s or possibly 4.3s? My informal study has shown that the 4.2s don't fail nearly as often as the 4.3 and 4.5. QSC has retired them in favor of the "K" series, which has new, more reliable power supply designs: the 4K4 and 8K4 replace the 4.3 and 4.5, and the 2K4 replaces the 4.2. The 8-channel amps had the newer design to begin with, though they too are replaced, by the 4K8 and 8K8.



                      • #41
                         Most of the Atmos rooms were done with the 4-channel amps--the larger ones have 20 of them. We stepped up to 48-port switches to combine management, media, and QLAN functionality into one VLAN'd switch rather than having three separate switches serve as three different potential points of failure during shows (a rough sketch of such a port plan is at the end of this post). While the rooms with 8-channel amps didn't need 48-port switches, we elected to keep using them so we'd only need one base config, could stock one model as a cold spare, and would have plenty of ports for future expansion if needed.

                         Every one of my Atmos screens with the DPA4.xQ models has had at least one amplifier failure. Compared to the DCA series, the 4-channel DPAQs do not tolerate utility power transients well. Most of the failures have been 4.3s, but that could simply be because we have more of them than the 4.2 and 4.5 models combined. None of the failures affected a single channel--they always took out the entire amplifier. Most were the dreaded '0xD12' error code (i.e. power supply exploded--return to the factory), though sometimes simply power cycling the amp from the rear-panel switch would bring it back. QSC says they've reduced the chances of the power supply failure happening in newer firmware, but I've still had these amps die as recently as 8.1.1. A couple also expired during firmware updates, including one on the upgrade from 7.2.1 to 8.x. I haven't lost an 8-channel DPAQ yet.

                         While I love the flexibility of the Q-SYS platform, the Core 110 has also proved problematic. The software was unstable in large Atmos rooms below Q-SYS 7.2.1--doubly so for the rooms with redundant Cores. Then there's the game of whack-a-mole whenever upgrading to a new version of Q-SYS, to see what the software engineers broke that wasn't caught in testing or declared in the release notes. On the hardware failures: I've had three different 110's croak in production under warranty (two of them went back for repair, then experienced the original failure again in the field a short time later). The ones that failed were all on UPSs, too. For screens that don't have redundant Cores, we now stock a cold spare regionally, due to the reliability concerns with the production units and the turnaround time to get one repaired and back from Costa Mesa should one go down.
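
                         To make the "one VLAN'd switch" idea concrete, a hypothetical port plan for one of these rooms might look something like the sketch below. The VLAN IDs and port counts are invented for illustration; they are not our actual configuration.

                         Code:
                         # Hypothetical port/VLAN plan for a single 48-port switch carrying
                         # management, media, and Q-LAN traffic. VLAN IDs and port counts are
                         # invented for illustration, not an actual site configuration.

                         VLAN_PLAN = {
                             "management": {"vlan_id": 10, "ports": 8},   # projector, automation, monitoring
                             "media":      {"vlan_id": 20, "ports": 8},   # content/media network
                             "qlan":       {"vlan_id": 30, "ports": 24},  # Core, DPA-Q amps, I/O frames
                         }

                         SWITCH_PORTS = 48

                         used = sum(v["ports"] for v in VLAN_PLAN.values())
                         print(f"{used} ports assigned, {SWITCH_PORTS - used} spare for future expansion")
                         for name, v in VLAN_PLAN.items():
                             print(f"VLAN {v['vlan_id']:>2} ({name}): {v['ports']} ports")
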
                        Last edited by Jason Raftery; 06-08-2020, 10:44 AM.



                        • #42
                           The only 7.x firmware I've trusted was 7.1.2. I had problems with 7.2.x but, again, zero failures thus far. I have no CORE110 in redundant mode yet, but I have a three-plex that is going to be done that way, though not Atmos.

                           I have a pair of CORE510c's in redundant mode for Atmos, also running 7.1.2, with zero issues. I have no 4.3s or 4.5s in the field. In fact, I don't have any 4.2s in the field either, but I have one at home (and a CORE110c that hasn't failed either; it is always on the latest firmware, 8.3.1 at the moment).

                           You are bolder than I am in combining QLAN, control LAN, and media on either a single VLAN or even a single switch. I break them out. Switches are cheap, comparatively speaking; then again, I spend less on three switches than you do on one! As for failure points, by having more things you do increase your risk of a failure; however, with more things you also reduce a single failure's impact. So, if one of the QLAN switches dies, I don't go off screen. And now that servers are in the projector, I could withstand a control LAN going down too, and the server/projector would continue, though it would be more manual for the sound/lights, etc. I've had REAL good luck with HP ProCurve switches (unmanaged), first the 1410G series and then the 1420G series: zero failures. I used to use SMC before they got bought out by LG; they have done well for me (16-port and 8-port varieties), but they are now heading on 10 years old in some locations.



                          • #43
                             I haven't seen a Q-Sys Core failure yet, but the number of Q-Sys systems we're involved with is limited and none has yet been upgraded to 7.2.x. We've got a bunch of 110f's in our screening room that haven't gotten the love they should. They will be doing Atmos one day, but just for a single room with a limited number of channels; despite the coronavirus lockdowns, I haven't found the time to implement it.

                             Having done datacenter stuff in "past lives", I've had my fair share of experiences with switches all over the board. I've had pretty solid experiences with HP/HPE/Aruba (you gotta love all those take-overs, splits, and name rotations...), but primarily with the gear that was actually designed by them. The lower-end switches, like the 14xx and 18xx series, are off-the-shelf OEM products (mostly Accton) that you can also buy with a Dell or D-Link logo and countless other logos. I have had mixed experiences with the 18xx series, which were often used as cheap, manageable switches for offices. Those experiences were not so much switches crashing as PSUs failing after power cycles.

                             We now tend to centralize stuff on a bunch of Juniper switches, EX or QFX. I don't like fancy SDN stuff; maybe it makes sense if you run a network with 3000 network devices, but I tend to do "old-school" static configurations on the devices themselves. Juniper switches can also act as routers, and they allow you to cluster them into a "Virtual Chassis", which is a bit more advanced than your average switch stack: the devices remain independent of each other and monitor each other. It also supports nice stuff like LACP bundles across switches, where you can hook up a machine with e.g. 2 or 4 GigE ports (or even 10GE ports) and have the benefit of full bandwidth when everything is working, but still half of it when one of the switches fails (a quick arithmetic sketch of that is at the end of this post).

                             In those configurations, I generally tend to centralize everything I can on those switches: all the different LANs, even storage backends. Although the switches are generally more expensive than your basic manageable switch, if you really use all the features they offer and you centralize everything on them, it makes sense. Obviously, centralization also leads to increased dependency on a limited set of devices, but we're talking Q-Sys here, so that's the end game anyway.

                             Since those Q-Sys Cores are essentially just Linux boxes running on Intel CPUs, we asked QSC if they might be willing to offer their Cores as virtual appliances for e.g. VMware, but so far they seem reluctant to do so. One of the problems of virtual machine clusters is obviously the real-time aspect, but having a Q-Sys Core as a VM-deployable appliance would be great for staging and lab environments.
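
                             As a back-of-the-envelope illustration of the cross-member LACP point above: the port counts and speeds below are just example values, not a recommendation for any particular setup.

                             Code:
                             # Back-of-the-envelope sketch of an LACP bundle spread across two
                             # Virtual Chassis members: full bandwidth with both members up, half
                             # when one fails. Port counts and speeds are example values only.

                             PORT_SPEED_GBPS = 1     # e.g. GigE ports; use 10 for 10GE
                             PORTS_PER_MEMBER = 2    # half of a 4-port bundle lands on each switch

                             def bundle_bandwidth(members_up: int) -> int:
                                 """Aggregate bandwidth (Gbps) of the bundle with N members alive."""
                                 return members_up * PORTS_PER_MEMBER * PORT_SPEED_GBPS

                             print(bundle_bandwidth(2))   # 4 Gbps with both chassis members healthy
                             print(bundle_bandwidth(1))   # 2 Gbps if one member fails: degraded, not down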



                            • #44
                               It's funny you say that, Marcel, because QSC has publicly stated that they see their future as a "software company" and that their goal is to develop Q-Sys to where it runs on standard off-the-shelf Intel servers and you just buy the license(s). This is supposedly the idea behind licensing for Lua scripting and UCIs. My guess is that they are just not there yet, either as a business model or with the hardware/software configuration.



                              • #45
                                 Perhaps, Sean, but the larger "COREs" are just Dell servers (e.g. the CORE 5200) running the Q-SYS software, no?

