Just had my first interaction with the new DCDC satellite delivery system, and I’m curious about something.
When the digital conversion happened I installed a wired network throughout my drive-in: underground CAT6, all gigabit pro gear. The network spans most of the 24 acres, but there are no excessively long runs; my booths and buildings are situated just right so that I could break off at a network switch, and I think my longest run of cable is about 175 feet between two switches. Latency is also extremely low, even when the network is under heavy load: if I ping something at the farthest end of the network I always get between 1-3 ms, no more.
My Kencast server shows as connected at 1 gigabit on the network. But when I pull a movie from it with an FTP ingest, judging by the size of the content it is barely transferring at 80 Mbps: it takes about 3.5 hours to transfer a movie from the Kencast into my server's storage, even when network activity is at its lowest. If I FTP between two of my SR1000 IMBs, they can transfer an entire movie in about 40 minutes or less, which works out to around 500 Mbps. Heck, even my old SX-2000AR can hit those speeds. Why can't my Kencast do this?
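For anyone checking my math, here is the arithmetic as a quick Python sketch. The ~126 GB feature size is my assumption for illustration (a typical DCP feature runs somewhere in the 100-250 GB range); plug in your own sizes and times.

```python
# Convert a transfer's size and duration into average throughput in
# megabits per second, so different transfers can be compared directly.
# Using decimal units: 1 GB = 8000 megabits.

def throughput_mbps(size_gb: float, seconds: float) -> float:
    """Average throughput in Mbps for size_gb transferred in `seconds`."""
    return size_gb * 8000 / seconds

# A ~126 GB feature taking 3.5 hours averages about 80 Mbps:
print(round(throughput_mbps(126, 3.5 * 3600)))  # 80

# The same feature moved in 40 minutes averages about 420 Mbps:
print(round(throughput_mbps(126, 40 * 60)))     # 420
```

So at the IMB-to-IMB rate the same movie would finish in well under an hour; the Kencast is running at a fraction of what the link can carry.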
I don't believe my network is any part of the issue. The gentleman who installed the Kencast did tell me they gave me a very old unit, one of their first generations, and I don't know enough about them to tell myself. Technically I could pull the external CRU out of the Kencast, take it to a server, and ingest from it directly, but I want to be able to FTP because I have a network that can support it at decent speeds.
I'm thinking either the bottleneck is the Kencast's processor, or whatever hard drives (or possibly SSDs) they have in its RAID array are not high performance.
Does anyone know much about these systems?