Projection network


  • Projection network

    Hi,

    Traditionally we have set up the projection network using a Sonicwall or Fortinet router to separate it from the theater network.

    I'm considering, instead of using the router, just assigning a VLAN to the projection network. Any downside to doing this?

    Thanks

  • #2
    All the theaters I did installs in have two separate networks: Management and Media. Media is in no way connected to the Management network, which is the only firewalled network. Most have been running 8 to 12 or more years with no hacking issues; the firewall automatically alerts if someone attempts to penetrate it. Mostly I used a software-based firewall called Untangle. It was bought by another company and now only a free trial is available, but it's a top-notch firewall solution.



    • #3
      Hello, Gabriel here in Brazil. I use a switch with a VLAN separating the projection network from the theater network, and I have no problems.



      • #4
        In general, I don't bother with a firewall between front-of-house/internet and projection. The TMS is typically on both, and that's all that is needed. The projection network equipment does not need to access the internet, and if it does, access can be set up temporarily for an upgrade, etc. There is no need for a default route on the projection network unless you have an external integrator; if so, they may install a router, and you route via it so they can talk directly to all the projection equipment for telemetry retrieval.

        If you do want to access, for example, the players' web interfaces directly from a front-of-house computer, you will need to route it: either via a firewall (pfSense/OPNsense are free and do all you need) or by simply turning on routing in the kernel (Windows or Linux) of the TMS and using it as the router. Windows and Linux also have firewall capabilities if required. Workstations that need to access the players are typically given the needed persistent route individually, as an easy way to limit which stations can talk to the players. Or use firewall rules.
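        As a rough sketch of the kernel-routing option described above (the interfaces, subnets and addresses below are invented examples, not from any real site):

```shell
# On a Linux TMS with one NIC on each network, turn the box into the router:
sysctl -w net.ipv4.ip_forward=1   # persist in /etc/sysctl.conf to survive reboots

# On a Windows front-of-house workstation, give just that machine a
# persistent route to the (hypothetical) projection subnet via the TMS:
#   route -p add 192.168.50.0 mask 255.255.255.0 192.168.1.10
```

        Handing out the route per-workstation, rather than advertising a default route, is what limits which machines can reach the projection gear.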

        But in general, the simplest effort/easiest path is taken.

        Separate management and media networks are also, in general, a waste of time and just more complexity in a network. The practice dates from when smart switches were expensive and many sites bought 100 Mbit switches for management and 1 Gbit for media. Now 1 Gbit is standard and 10 Gbit is quite common; there are no bandwidth issues with sharing the network. (The internet shares hundreds of thousands of apps all at the same time and works fine; so will this.) Even so, we typically do VLAN a group of ports for media-only traffic, as it's not that big a deal and integrators are used to it being that way, even if they don't understand the technical reasons why.
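        Grouping a set of ports into a media-only VLAN is only a few lines on most managed switches. A hypothetical Cisco-IOS-style sketch (the VLAN ID and port range are invented for illustration):

```
configure terminal
 vlan 20
  name MEDIA
 exit
 interface range GigabitEthernet1/0/1 - 8
  switchport mode access
  switchport access vlan 20
 end
```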

        Security-wise, I am not a fan of going all out on security on the projection network. It is very expensive in running costs. You need an integrator that knows security, and I have not seen one that knows it well; it's not a core talent of an integrator, and they cannot afford that type of staff at the cost levels they can extract from the cinema. If a hacker did get in, there is no benefit in gaining access to the encrypted, FIPS-compliant DCP assets. Sure, they could screw up a show, but there is nothing in it for them.

        It is the POS system that they will go after, for transaction information, credit cards, etc. And I don't see anyone going on about how poorly cinemas handle that.
        I hear a lot of "Oh, the projection network has to be secure..." but never "The POS network has to be secure." Typically, POS systems sit on the most insecure network of all: the front-of-house network, with desktops and other administration workstations susceptible to internet-borne attacks.

        From my perspective, a lot of these "imperatives" forced onto cinemas are more about extracting profit from cinemas than about doing what the job and its actual security issues require.

        I know some here will very much disagree, but I am a cinema owner, not an integrator. I see it from a cinema's perspective, but with a deep, developer-level understanding of how it all works.



        • #5
          For the sites I support, I do put a firewall in as, among other things, a demarcation line between the equipment I'm responsible for and whatever other IT needs may be present in the theatre. I also don't want the booth equipment on any network that handles money transactions, and I don't want to be under the thumb or policies of the venue's IT department (the venue, for me, may not be a cinema). I typically have multiple LANs in the booth, management and media; QSYS or AV-over-IP will add a couple more, and possibly a DMZ for rentals, though that is mostly going away in favor of client-supplied public WiFi.

          By far the most prevalent system I use has two switches at the TMS/LMS area, one handling communication/data and the other handling only media. The media network has no gateway; the only way onto it is via physical connection. The router will allow an FTP transfer of POS schedules to the TMS, and that is about it. If the client needs or wants a connection to the booth network from the manager's office or another device, we can set that up too, via routing, WiFi, or even a wired connection.

          I tend to use unmanaged 1G switches in the pedestals (or sound rack), as I don't see a big benefit to a managed one in the scheme above. There are some advantages for troubleshooting, but not significant ones. Running cheaper unmanaged switches means that if one were to fail, the client can go to any store that sells switches, that day, and be up and running on that screen again without any configuration...just plug it in.

          If I have QSYS for sound, depending on the size of the complex (the size of the QSYS Core and how many LAN ports it has), there will be a pathway via the router for QSYS to communicate with the equipment, since servers and automations need to talk to the sound system, and QSYS is an automation system too. I always run dual QLAN networks (and dual Cores) to ensure zero downtime. QSYS is also getting into video, so the bandwidth requirements can be significant, but mostly I protect the sound and ensure there are no hiccups, regardless of traffic on the networks. People will tolerate laggy controls, even the occasional pixelated picture, LONG before they put up with even a minor sound defect. Everyone is an expert on sound, it would seem.

          My philosophy on this is that CAT cable is cheap. Relatively low-end managed switches are cheap and unmanaged switches are cheaper. Having multiple LANs doesn't cost all that much and is often a lot cheaper than buying more sophisticated switches that have the bandwidth and features to make the VLANs worthwhile.

          At the other end, we have also put in a fabric system with stacked switches at the TMS/LMS and VLANed pedestal switches at each screen that receive one CAT cable from each of the switches in the stack at the TMS/LMS. It is retrofittable to a traditional set up since both cables are already run to each screen. The advantages of this system are:
          • Fast speeds...typically 20 Gbit from the TMS/LMS to the switch stack and 2 Gbit to each screen. You could transfer content to many screens (12 for sure) at full speed, compared to a traditional system.
          • Redundancy. The stack will keep the system up if either switch in the stack fails, as all screen switches have a cable from both switches. However, any management or media connection that was on the failed switch would need to be plugged into the remaining switch. A problem with either cable feeding a screen will not take that screen offline...you'll just drop to 1G transfers, just like a "typical" system.
          The downsides are:
          • You are dependent on those switches and configurations. If a screen switch dies, you will need to replace it with a like switch and configure it. While you could throw an off-the-shelf switch in to get the screen running, it won't work with the stack until a switch with the necessary capabilities is configured with the VLANs again. Be sure to have spare(s) on site.
          • Stacked switches have to match, it would seem. If one goes and you are past the point where you can obtain an identical model, you are replacing the whole stack, so, again, keep spares (and keep them configured).
          It is cheaper, however, in my opinion, to use lower-end equipment, as the cost of the more sophisticated stuff grows geometrically.
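          The rough arithmetic behind those transfer speeds, assuming an illustrative 200 GB feature DCP and idealized line-rate throughput (real transfers run slower):

```shell
#!/bin/sh
# Back-of-envelope DCP transfer math (illustrative numbers, not measurements).
dcp_gbit=$((200 * 8))        # 200 GB is about 1600 gigabits

secs_1g=$((dcp_gbit / 1))    # one screen over a 1 Gbit/s link
echo "1 Gbit/s: ${secs_1g} s per screen (about 27 min)"

secs_2g=$((dcp_gbit / 2))    # 2 Gbit/s down to the screen switch
screens=$((20 / 2))          # 20 Gbit/s trunk divided by 2 Gbit/s per screen
echo "2 Gbit/s: ${secs_2g} s per screen, ${screens} screens at full rate"
```

          So each screen fills roughly twice as fast, and the trunk can feed around ten full-rate streams at once before it, rather than the per-screen links, becomes the bottleneck.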

          Coming back to the router: with multiple LANs, the router, among other things, lets me set up the interaction I want/need between the various LANs. It often allows other things, where there are multiples, to cost less. When I set up WiFi (which is more and more often, now that so much is web-GUI based), I even put that on a separate LAN, and the only way onto a network that actually has equipment on it is via a route with suitable access rules.



          • #6
            Our usual setup involves managed switches with VLANs and a firewall or firewall cluster as the "switchboard" between the different network zones and as the demarcation point to other networks. The reasons for managed switches are standardisation, proper multicast support and remote monitoring. Reduced wiring can be a factor too, but as Steve indicated, Cat6 wiring is pretty cheap and fiber doesn't cost an arm and a leg anymore either. Still, it can add up in certain setups, and not all locations allow for easy rewiring.
            Managed switches do require some "reprogramming" when being replaced, so if you need to rely on "unqualified remote hands" for equipment replacements, I can understand choosing unmanaged switches as easily deployable "cold spares". Our solution usually is to have two switches available and to factor in sufficient free ports on either switch so equipment can be swapped between them when necessary.
            Personally, I also don't believe in stacks. While they offer an advantage in manageability, stacks tend to create a single point of failure: a node crash often makes the whole stack unstable, and in most cases, updating firmware on a stack means losing the entire stack.
            For firewalls we standardize on Fortinet Fortigates, as they have offered the best trade-off between functionality, reliability and cost. I generally route all traffic between "zones" through the firewall. For high-availability setups we use Fortigate active/passive clusters, which have proven to be pretty reliable. They are also usually the endpoint for management SSL and IPsec VPNs.
            I want the networks we manage to be segmented from the rest, and the firewall is a good demarcation point. It also prevents stuff like STP or other random broadcast traffic from leaking into your network and causing havoc, or vice versa.

            While you could use a vanilla router or a managed switch with L3 capabilities to route between VLANs, I usually prefer a proper firewall as the "router" between LANs or VLANs, simply because it allows you to properly segment the LANs from each other. This furthermore increases the security of your setup and reduces the chances of some cryptolocker or other bad stuff finding its way into your network via e.g. the IT network.
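            As a minimal sketch of that default-deny, zone-routing idea on a generic Linux firewall using nftables (a Fortigate expresses the same policy in its own CLI; the subnets and ports here are invented examples):

```shell
# Default-deny forwarding between zones; allow only what is listed.
nft add table inet fw
nft add chain inet fw forward '{ type filter hook forward priority 0; policy drop; }'
# Return traffic for sessions already allowed:
nft add rule inet fw forward ct state established,related accept
# Management zone may reach media-device web UIs, nothing more:
nft add rule inet fw forward ip saddr 192.168.1.0/24 ip daddr 192.168.50.0/24 tcp dport '{ 80, 443 }' accept
```

            With the drop policy in place, anything not explicitly permitted, including malware trying to cross from the IT LAN, simply never gets forwarded.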



            • #7
              I have worked in several shops that have networked equipment and every single one of them uses a separate network for the machines.

              Security is an important reason to have separate networks but, more importantly, your production equipment is mission critical. If there is a problem on the network that shuts your production line down, you are out of business until it gets fixed.

              Some bonehead in a back office, somewhere, could do something stupid which accidentally borks the network. There could be a virus/malware attack. Further, at a basic level, it's a whole lot easier to debug a networking problem if you don't have to worry about messing things up on the "other side." (e.g. If you are working on something in the office, you don't have to worry about doing something that might shut the production lines down or vice versa.)

              In a theater, I think of the projectors and other equipment as somebody in manufacturing would think of their production line. Your projectors are a production line of sorts. Aren't they?

              Like I said, security, hackers, viruses, or just plain "stupid people" are important reasons to segregate your networks, but at a basic, common-sense level, it is vital to protect your mission-critical equipment.

              Besides, it would sound really, really stupid if you had to tell a customer that you can't show movies, today, because your network went down!



              • #8
                I too am a big fan of switches that run on the default config for each screen, as it makes a drop-in replacement from the local store to get the screen running again.
                But I must admit the core switch is best VLANed, if you have the ability.

                The main reason is that, yes, it's best to keep traffic from other screens off the physical wire between screens. Another reason: in writing my device-detection programs for projection networks, I have discovered some very crappy embedded devices in use in the industry. If they get hit with too many packets too fast, they lock up. The manufacturer will not fix it and requires sites to VLAN them off to only the devices they need to talk to in order to make them reliable.
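                One way to probe such fragile devices without tripping them is to rate-limit the scan. For example, with off-the-shelf tools (the interface name and subnet are placeholders):

```shell
# arp-scan can cap its outgoing packet rate; 64 kbit/s is well
# below its default and gentle on weak embedded IP stacks.
sudo arp-scan --interface=eth0 --localnet --bandwidth=64000

# nmap's "polite" timing template (-T2) likewise slows its probes down.
sudo nmap -sn -T2 192.168.50.0/24
```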

                But then again, these days it's less of a problem, as even cheap embedded devices now have robust IP stacks and much more power to deal with whatever you throw at them.

                But in general, for independents who have multiple service entities coming in to do work, the less complex the network, the better. In the long run, the more complex it is, the harder it is for those agents, and that just ends up costing the cinema more and more money as each service entity has to navigate the complexities of the network.

                And, as a cinema owner, it's very important to have multiple options; otherwise, as we have all experienced at some stage, you get over-the-top quotes for work. I have seen integrators drop quote values by 30% once they knew there was competition. Australia is remote and BIG, and the major integrators are owned by the major chains (i.e., the independents' main competitors). Due to this dynamic, we see distorted behaviour regularly.

                I find it strange that independents used to take a lot of ownership of how the older cogs-and-sprockets film projectors worked, but now that it's all digital, they prefer to hand over complete control to entities with conflicts of interest. They really need to take more responsibility for their core infrastructure, just as looking after a film projector used to be.



                • #9
                  I agree, James. For all of my words above, my topology boils down to two concentric rings: one "Management" (control) and one "Media."

                  My networks are color-coded. Blue is Management (early on, we used Orange CAT6 on higher-priority devices like the server and projector, to make them quick to identify); Yellow is Media. For our QLANs, Black is QLAN-A and White is QLAN-B. Violet is AES3 (not a network, but it uses CAT cable like everything else seems to now). Grey (and sometimes black) is dedicated, shielded A/V; again not network, but things like HDBaseT and other A/V-over-twisted-pair.

                  So, from a contractor's or theatre owner's standpoint, you can see what everything is very quickly without needing to figure it out. I don't really need to VLAN things, because there is a dedicated switch for Management. The Media switch tends to have two VLANs, so that Management has a pathway into it for support (it has no gateway or other means to get off the network).

                  The QLANs use managed switches, as they have pretty strict PTP and multicast requirements and are moving audio, and possibly video. If an entire complex runs off one QSYS Core, you really don't want that Core having any issues, as it affects every screen on it. It's a hard concept for some that the separate theatres are not separate from the sound system's perspective; they are all on-ramps or off-ramps for the audio (or control, or video). Naturally, those switches are all managed, but again, I don't use high-brow switches, just ones that do the job. Getting switches with the bandwidth to move the worst-case video load that could theoretically happen (I'm already covered on audio) would easily jack the cost up 4-5 times, and that also means a replacement switch would cost 4-5 times as much. I reserve the really high-bandwidth switches for when we're moving video, as that needs jumbo frames (a really wide super-highway). It's just more cost effective.

