
  • IMS3000 infected with malware

    Thought this would be worth writing up as a heads up.

    This IMS3000 is installed on a very large university campus with a serious and proactive IT security team. A few days ago, the user reported slow ingestion, pauses/stutters during playback, and difficulty downloading a "detailed report" log package when I asked for one. When one was finally obtained, the log analysis had multiple "out of memory" errors, which I'd never seen before.

    [Screenshot: log package analysis showing multiple "out of memory" errors]

    Very shortly afterwards, the school's IT guys raised the alarm, stating that they saw network traffic suggesting that the IMS3000 was being used for cryptocurrency mining and trying to make contact with a remote site, the address and port of which were known to belong to bad actors.

    When I got to the site yesterday, my intention was to download a configuration backup, "nuke" the bootflash drive by connecting it to a PC using a 9-pin Dupont to USB adapter, rewrite a fresh system image to it, reboot the IMS3000, and install the backup.

    First alarm bell: the configuration settings .dbk file I downloaded was 55 megabytes (as were all the auto backups that it had done)! As most of you will know, it's almost unheard of for one of these files to be bigger than one megabyte. I scanned it with Windows Defender, which found this:

    [Screenshot: Windows Defender detection result for the .dbk backup file]

    The next problem was that, using the Dupont to USB adapter, three separate computers simply wouldn't see the bootflash drive at all (not even as a device, let alone any partitions). I tried my dual-boot laptop, booted into both Windows and Ubuntu, a Windows PC in the booth, and a Mac desktop in the booth. In the end, I had to create an emergency boot USB stick. Needless to say, I reconstructed the configuration settings manually, and did not restore the infected backup.
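
    In case it helps anyone attempting the same kind of recovery: the first thing I'd check on a Linux machine is whether the kernel sees any block device at all when the adapter is plugged in, before worrying about partitions or filesystems. A rough sketch using only the Python standard library (a generic diagnostic aid, not a Dolby procedure):

    ```python
    #!/usr/bin/env python3
    """Quick check: does the kernel see the attached drive as a block device at all?
    Run it before and after plugging in the USB adapter and compare. Linux only, read-only."""
    from pathlib import Path

    SECTOR = 512  # /sys/block/<dev>/size is reported in 512-byte sectors

    def list_block_devices():
        for dev in sorted(Path("/sys/block").iterdir()):
            try:
                sectors = int((dev / "size").read_text().strip())
                removable = (dev / "removable").read_text().strip() == "1"
            except OSError:
                continue
            yield dev.name, sectors * SECTOR, removable

    if __name__ == "__main__":
        for name, size_bytes, removable in list_block_devices():
            print(f"{name:10s} {size_bytes / 1e9:8.1f} GB  {'removable' if removable else 'fixed'}")
        # If the bootflash never shows up here, the USB bridge or the drive's own
        # electronics is the problem, not the partition table or the filesystem.
    ```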

    The IT security people have scanned the external NAS for this IMS3000, and concluded that its operating system is not infected. Based on their analysis of network activity, they believe that the infection most likely happened via an infected DCP shipping drive, and narrowed it down to a time window that implicates one culprit. Annoyingly, that drive has now been returned and is no longer in the booth for analysis.

    Dolby are going to ship a new bootflash drive to us, but I would like to be able to figure out why I wasn't able to get into the infected one using the Dupont to USB adapter.

    Again, I thought this worth mentioning, in case anyone is experiencing similar symptoms, or does so in the future. The most obvious evidence of the infection is the abnormally large size of the backup file, combined with "out of memory" errors in the log analysis.
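
    If you want to watch for those two symptoms without waiting for playback problems, something along these lines would do it. It's only a rough sketch: the folder names and the one-megabyte threshold are my own assumptions for illustration, not anything official from Dolby.

    ```python
    #!/usr/bin/env python3
    """Flag the two symptoms described above: abnormally large .dbk settings backups
    and "out of memory" entries in an unpacked detailed-report log package.
    Folder names and the 1 MB threshold are assumptions for illustration only."""
    from pathlib import Path

    BACKUP_DIR = Path("backups")         # wherever you keep downloaded .dbk files
    LOG_DIR = Path("detailed_report")    # the unpacked "detailed report" package
    SIZE_LIMIT = 1 * 1024 * 1024         # these files are normally well under 1 MB

    def oversized_backups():
        return [p for p in BACKUP_DIR.glob("*.dbk") if p.stat().st_size > SIZE_LIMIT]

    def oom_lines():
        for log in LOG_DIR.rglob("*"):
            if not log.is_file():
                continue
            try:
                text = log.read_text(errors="replace")
            except OSError:
                continue
            for line in text.splitlines():
                if "out of memory" in line.lower():
                    yield log.name, line.strip()

    if __name__ == "__main__":
        for p in oversized_backups():
            print(f"SUSPICIOUS SIZE: {p} ({p.stat().st_size / 1e6:.1f} MB)")
        for name, line in oom_lines():
            print(f"OOM in {name}: {line}")
    ```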

  • #2
    Somebody tried to ingest a car.

    I didn't even know that could happen, let alone through a DCP shipping drive.

    They should inform the distributor as soon as possible.



    • #3
      I don't feel comfortable naming the distributor involved, as we no longer have the drive and therefore don't know for sure that it was the source of the infection. We will let them know. It wasn't Deluxe. The school's IT security person told me that apparently, a Linux-formatted drive can contain an instruction in the boot sector to execute a Bash script immediately the volume is mounted, and that script then installed the malware. He suspects that this is what happened.



      • #4
        I'm not going to claim I'm the all-knowing expert in this kind of thing, but I really don't buy the "infected disk" story, as it doesn't make any real sense to me. I rather suspect the infection came from another infected host on the same network, exploiting an unpatched service running on the server.

        Even if a distribution drive did contain a virus, I see no way it would get executed via a normal DCP ingest procedure. A normal Linux filesystem mount doesn't involve the automatic execution of code on the drive, and neither does the ingestion of the DCP itself. The much more likely scenario is that a machine on the same network got infected, started to scan for exploitable hosts, and used something like the recent SSH exploit. The fact that it was being used for crypto mining, which makes little sense on a rather underpowered system like the IMS3000, also indicates that this wasn't really a targeted attack, more like collateral damage. That only strengthens my hunch that the attack vector was the network and not someone's distribution drive.
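
        For anyone who does want to poke at a suspect drive on a scratch machine anyway, you can make that point explicit by mounting it read-only with noexec/nosuid/nodev, so nothing on it gets a chance to run just because the volume is mounted. A rough sketch only; the device name and mount point are placeholders, and this needs root:

        ```python
        #!/usr/bin/env python3
        """Mount a suspect drive read-only with noexec/nosuid/nodev so that nothing on it
        can be executed from the mount point. Device and mount point are placeholders."""
        import os
        import subprocess

        DEVICE = "/dev/sdb1"        # placeholder: whatever the drive enumerates as
        MOUNTPOINT = "/mnt/suspect"

        os.makedirs(MOUNTPOINT, exist_ok=True)
        subprocess.run(
            ["mount", "-o", "ro,noexec,nosuid,nodev", DEVICE, MOUNTPOINT],
            check=True,  # needs root; fails loudly if the mount doesn't work
        )
        print(f"{DEVICE} mounted read-only at {MOUNTPOINT}; inspect it, then umount.")
        ```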



        • #5
          Is that thing running on Windows?

          Unless something has changed recently (which isn't beyond the realm of possibility) I don't think that can be done on Linux. At least not without booting the entire kernel upon mounting the drive, and I think most people would notice if the machine suddenly rebooted under those circumstances. Wouldn't they?

          I just found this: https://unix.stackexchange.com/quest...ub-boot-option



          • #6
            Originally posted by Leo Enticknap View Post
            The school's IT security person told me that apparently, a Linux-formatted drive can contain an instruction in the boot sector to execute a Bash script immediately the volume is mounted, and that script then installed the malware. He suspects that this is what happened.
            I suspect the guy has no clue. If this were possible, all alarm bells would've gone off a long time ago. Either that, or the guy has just discovered a new MAJOR zero-day security issue.



            • #7
              The only way anybody will know for certain what has happened is by following up on the theory that it was the drive. If you can get that drive back (unmodified) you can let the school prove their claim. Even then you might want an independent analysis. Maybe the distributor will take the situation seriously and seek to confirm that their disk is not at fault. Because if that drive was infected then it probably wasn't the only one. But... you can't believe anything anyone says anymore. So in the end you won't really know.

              I think blame seeks the path of least resistance.





              • #8
                Originally posted by Leo Enticknap View Post
                This IMS3000 is installed on a very large university campus with a serious and proactive IT security team. [...] The most obvious evidence of the infection is the abnormally large size of the backup file, combined with "out of memory" errors in the log analysis.
                Was this a University we are both familiar with? The IT staff I worked with back in the day was pretty good, but they had no clue about how the whole D-cinema ecosphere worked.

                It will be interesting to see the final report on exactly how this happened, and what, if anything, can be done to prevent it.



                • #9
                  Originally posted by Bruce Cloutier View Post
                  I think blame seeks the path of least resistance.
                  As do most of those exploits. It's probably the simplified version of Occam's razor...

                  We need to be aware that much of the cinema equipment we're using may be pretty vulnerable to many of the exploits out there. Much of this equipment hasn't seen patches in years, and the recent OpenSSH exploit was a rather big one, affecting more than 14 million publicly exposed machines and probably many more locally. And yeah, the IMS3000 is vulnerable to those exploits, as are the IMS2000, the IMS1000 and a whole lot of other gear with OpenSSH enabled and without the latest patches applied. There are active exploits happening right now.
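
                  If you want a quick picture of what's actually running on your own booth network, the SSH identification banner is enough to inventory OpenSSH versions without logging in to anything. A rough sketch; the address list is a placeholder, and you'd compare the reported versions against your vendor's or distro's advisories rather than anything in this post:

                  ```python
                  #!/usr/bin/env python3
                  """Inventory SSH banners on a list of booth-network hosts. An SSH server sends an
                  identification string (e.g. "SSH-2.0-OpenSSH_6.0p1") as soon as you connect to
                  port 22, so no credentials are needed. The host list is a placeholder."""
                  import socket

                  HOSTS = ["192.168.1.10", "192.168.1.11"]  # placeholders: your servers/projectors

                  def ssh_banner(host, port=22, timeout=3.0):
                      try:
                          with socket.create_connection((host, port), timeout=timeout) as s:
                              s.settimeout(timeout)
                              return s.recv(256).decode(errors="replace").strip()
                      except OSError as exc:
                          return f"no answer ({exc})"

                  if __name__ == "__main__":
                      for host in HOSTS:
                          print(f"{host:16s} {ssh_banner(host)}")
                  ```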

                  Since the route of infecting a cinema server with an "infected disk" is virtually impossible, and this seems to be one of the more common types of malware out there, simple reasoning suggests it came in via an infected host on the network, which could be anybody plugging their notebook into the same subnet, a management workstation, or even another (Linux) server on the network.

                  I'm pretty allergic to made-up b.s. If you don't know how stuff works, fine, but don't make shit up, which is what this IT guy was clearly doing. If he can prove that this came in via an "infected disk", I will retract my statements, but I really doubt I'll need to... It simply worries me that security folks can be this clueless; maybe that's why this shit keeps happening all the time in the first place...

                  So, instead of trying to hunt down a ghost drive, try to hunt down the host that probably started this infection. Look at what DHCP leases were given out on that subnet to "temporary users" like notebooks or other mobile devices, and what MAC addresses were associated with them. Are they known within your network? Did you already scan all the other hosts on the network for an infection?!
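
                  As a starting point, the ARP cache of any machine on that subnet already gives you the IP-to-MAC pairs it has recently talked to, which you can then match against the DHCP lease table. A rough sketch, Linux only:

                  ```python
                  #!/usr/bin/env python3
                  """List IP/MAC pairs from the local ARP cache (/proc/net/arp) as a starting point
                  for working out which hosts have recently been active on the subnet."""
                  from pathlib import Path

                  def arp_entries():
                      lines = Path("/proc/net/arp").read_text().splitlines()[1:]  # skip the header row
                      for line in lines:
                          fields = line.split()
                          if len(fields) >= 4 and fields[3] != "00:00:00:00:00:00":  # skip incomplete entries
                              yield fields[0], fields[3]  # IP address, MAC address

                  if __name__ == "__main__":
                      for ip, mac in arp_entries():
                          print(f"{ip:16s} {mac}")
                  ```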



                  • #10
                    I remember installing a server and, shortly afterwards, the customer called asking why we didn't upgrade the software - it turned out that the IT dept had scanned the network and found the server (I think it was a DSS) running a VERY old kernel, which raised a massive red flag with them.

                    Linux is not kept up to date on those servers - in fact I remember a Doremi server which received a kernel update... and then a downgrade some time later, as the new kernel wouldn't work well with the existing software and clearly fixing the software was too complicated.

                    Those servers, after all, are supposed to run behind a firewall, and nothing is supposed to run from external devices, as Marcel said. I am no expert on the subject, but I'd agree that if an infected machine found an old Linux kernel exposed on the network, it would be like Christmas for some malware.

                    It's an interesting incident though; I cannot remember seeing a virus/malware on a cinema server in the many years I've worked with them. I'm sure the manufacturer would be interested in investigating more.




                    • #11
                      Very weird. I know of no boot sector or partition flag that can make a Unix system run a script; only a udev configuration can do that. So I call BS on that.
                      As for the older SSH daemons: yes, there are some issues with older SSH servers, but typically those issues are not easily attacked. It takes effort, and as such I don't see it as being likely.
                      However, I would tend to believe it may have been a worm trying well-known passwords. I wouldn't be surprised if the passwords for Dcinema equipment have made their way onto those widely utilised generic password lists.

                      It's likely the cause, as it's likely the simplest path.
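
                      That sort of thing leaves traces. Counting failed password attempts per source address in the SSH auth log is usually enough to see whether a password-guessing worm has been knocking. A rough sketch, assuming a Debian-style /var/log/auth.log; the path and log format vary between systems:

                      ```python
                      #!/usr/bin/env python3
                      """Count failed SSH password attempts per source IP in a Debian-style auth log.
                      The log path and line format are assumptions; adjust for the system in question."""
                      import re
                      from collections import Counter
                      from pathlib import Path

                      LOG = Path("/var/log/auth.log")  # assumption: Debian/Ubuntu default location
                      PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

                      counts = Counter()
                      for line in LOG.read_text(errors="replace").splitlines():
                          match = PATTERN.search(line)
                          if match:
                              counts[match.group(1)] += 1

                      for ip, n in counts.most_common(20):
                          print(f"{ip:16s} {n} failed attempts")
                      ```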



                      • #12
                        Originally posted by Frank Cox
                        Is that thing running on Windows?
                        Debian 7.10, so Linux-based. But the weird thing was that the infected settings backup zip had Windows malware executables in it.

                        I was also skeptical about the "autorun" on drive mount theory, for all the reasons others have given; but their IT guy believed (as of yesterday, at any rate) that this was the most likely explanation.

                        Originally posted by Marco Giustini
                        I'm sure the manufacturer would be interested in investigating more.
                        They are: Dolby has asked me for the infected OS flash drive, which of course we're happy to give them. They already have the infected settings backup zip. Hopefully they'll be able to figure out how the malware got in, and come up with a hotfix that closes that method of entry.

                        Originally posted by James Gardiner
                        I wouldn't be surprised if the passwords for Dcinema equipment have made their way onto those widely utilised generic password lists.
                        Unusually, this IMS3000 has had all its passwords changed from the factory defaults, per the school's policy.

                        Originally posted by Tony Bandeira, Jr.
                        Was this a University we are both familiar with?
                        It's not the university I think you think it is (with the 16/35 Kinotons in a split level booth). It is a similarly sized and operated campus, though.



                        • #13
                          Originally posted by James Gardiner View Post
                          Very weird. I know of no boot sector or partition flag that can make a Unix system run a script; only a udev configuration can do that. So I call BS on that.
                          As for the older SSH daemons: yes, there are some issues with older SSH servers, but typically those issues are not easily attacked. It takes effort, and as such I don't see it as being likely.
                          However, I would tend to believe it may have been a worm trying well-known passwords. I wouldn't be surprised if the passwords for Dcinema equipment have made their way onto those widely utilised generic password lists.
                          If the IMS3000 doesn't have ASLR enabled, then with the latest SSH vulnerability it takes about 10k operations for a successful attack. There have been servers compromised this way; it's just not very likely to succeed if you're running a modern machine with ASLR enabled. Generally, there is no reason not to run ASLR, other than compatibility with some legacy binaries that weren't compiled with ASLR in mind.
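
                          Checking how the kernel is configured takes one line, by the way; the setting is exposed in /proc. A minimal sketch:

                          ```python
                          #!/usr/bin/env python3
                          """Report the kernel's ASLR setting from /proc/sys/kernel/randomize_va_space:
                          0 = disabled, 1 = stack/mmap/VDSO randomised, 2 = full (heap as well)."""
                          from pathlib import Path

                          MEANING = {0: "disabled", 1: "partial (stack, mmap, VDSO)", 2: "full (incl. heap)"}
                          value = int(Path("/proc/sys/kernel/randomize_va_space").read_text().strip())
                          print(f"ASLR setting: {value} ({MEANING.get(value, 'unknown')})")
                          ```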

                          Originally posted by Leo Enticknap View Post
                          Debian 7.10, so Linux-based. But the weird thing was that the infected settings backup zip had Windows malware executables in it.
                          It's not uncommon for a malware package to have Windows executables within it. Your infected host usually also deploys a network scanner that's constantly on the lookout for potentially vulnerable hosts; if it finds a Windows candidate, that's where those Windows executables come into play.

                          Since it's obviously a generalized malware payload, I don't expect any sophisticated, targeted attack. Those things, 99% of the time, come in over the network. If it wasn't an attachment to a mail or a zero-day in someone's browser, then it was a vulnerable service. I mentioned OpenSSH because quite recently a pretty serious attack was published, but Dolby may be running countless other things that aren't up to date. Since you're not supposed to connect your IMS to the public internet without a proper firewall, the impact is usually limited, but it only takes one infected host on the network...

                          Originally posted by Leo Enticknap View Post
                          I was also skeptical about the "autorun" on drive mount theory, for all the reasons others have given; but their IT guy believed (as of yesterday, at any rate) that this was the most likely explanation.
                          You can believe James, Frank and rambling me... There is no such thing, and if there ever were, it would be a security hole making headlines.

                          Usually, when such an incident happens, we pull the plug on that network, completely shutting it down, to prevent further potential damage. We will only re-enable stuff if we have concluded that:

                          1. We've found the attack vector and were able to close it.
                          2. If the attack vector could not be identified, we made sure that all machines on the network were thoroughly scanned and updated to the latest security levels.
                          3. We could establish whether or not personal information has been stolen. This last one is actually a legal requirement in the E.U. whenever you've encountered a potential data breach; you're also legally required to file a report with your country's privacy authority. This being a network with projection equipment, the damage in this regard would probably be minimal, but even things like employees' names from e.g. an Active Directory count as personal data.



                          • #14
                            Stepping back on this topic: it's a miracle that it has taken this long for something like this to happen.
                            It's also a good reminder to treat all projection networks as separate from front of house. Plus, ensure projection networks are physically secure and unroutable to everything but those systems that "need" to communicate with projection equipment.



                            • #15
                              I try to do just that. I tell all of my customers...the best form of security on the booth network is to not let anything physically on it. There is no need. POS interaction is through the firewall (if the POS is on site...some are now cloud based) with a very narrow pathway, service and port. Some are now advocating that the booth network lose all connectivity to the outside (no gateway, no WAN, per se), with perhaps just the TMS being the sole device that can have connectivity out of the booth, if need be (particularly if the TMS is the NTP source for the equipment, though one could set up a dedicated NTP source to eliminate that need too).
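
                              One easy sanity check after setting a booth up that way is to confirm, from a machine on the booth network, that outside destinations really aren't reachable, apart from whatever narrow pinholes you've deliberately allowed. A rough sketch; the addresses and ports below are placeholders for illustration:

                              ```python
                              #!/usr/bin/env python3
                              """From a booth-network machine, confirm that outside destinations are NOT reachable,
                              apart from any deliberately allowed pinholes. Addresses/ports are placeholders."""
                              import socket

                              SHOULD_FAIL = [("8.8.8.8", 443), ("1.1.1.1", 53)]  # well-known public hosts, used only as probes
                              SHOULD_WORK = [("192.168.100.10", 80)]             # placeholder: e.g. the TMS web UI

                              def reachable(host, port, timeout=3.0):
                                  try:
                                      with socket.create_connection((host, port), timeout=timeout):
                                          return True
                                  except OSError:
                                      return False

                              for host, port in SHOULD_FAIL:
                                  print(f"{host}:{port} -> " + ("OPEN (check the firewall!)" if reachable(host, port)
                                                                else "blocked, as intended"))
                              for host, port in SHOULD_WORK:
                                  print(f"{host}:{port} -> " + ("reachable" if reachable(host, port)
                                                                else "NOT reachable (check that pinhole)"))
                              ```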

                              Knock on wood, I haven't had issues. I have come to blows where a client wanted their IT to control the booth network and integrate it with the rest of their plant. To which I responded..."fine, then it is ALL theirs...all problems too. Don't call me when things don't work/talk to each other, transfer...etc." 99 times out of 100...they back off of that plan and agree to having to go through my firewall for any outside interaction.

                              To Marco's point...I too ran into an IT department (at a quasi-government facility) where they flagged the DSS server as not meeting some security criteria. They didn't require us to replace it (this was back in the late series 1 days too) and instead isolated that network (which is what I wanted in the first place)...so, the A/V network was an island...perfect.
