Leo, it will be interesting to see what happens when some of the memory in these SSDs starts to fail... It's a good thing the array is mostly read-only, but other things, such as logs, are constantly being written to it as well. Anyway, what brand of drive did you use?
SSDs in a DSS200?
SSDs have been used in desktop computers for years. We have one in our projection room that has been running since 2012 or so, and it is operated every day: constant OS system file writes, swap file, etc. On a DCP server, I guess you hardly ever touch the write limits of modern SSDs. If you do, the drive will probably fail much like a mechanical hard drive, with a 'RAID error' reported to the DSS software. I'm pretty sure these SSDs, if of decent quality, will outlive normal hard drives. They will also create less heat in the server than 7200 rpm drives.
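The "you hardly ever touch the write limits" point can be sanity-checked with a rough endurance calculation. This is a sketch only: the TBW rating and the daily write volume below are illustrative assumptions, not figures for any specific drive or server.

```python
# Rough SSD endurance estimate for a DCP server workload.
# TBW rating and daily write volume are illustrative assumptions.

def years_until_write_limit(tbw_rating_tb: float, writes_gb_per_day: float) -> float:
    """Years until the drive's rated total bytes written (TBW) is reached."""
    writes_tb_per_year = writes_gb_per_day * 365 / 1000
    return tbw_rating_tb / writes_tb_per_year

# e.g. a drive rated for 600 TBW, ingesting ~50 GB of DCPs plus logs per day
print(round(years_until_write_limit(600, 50), 1))  # roughly 33 years
```

Even with generous assumptions about daily ingest, the projected figure comfortably exceeds the service life of the server itself.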
Originally posted by Mark Gulbrandsen: Anyway, what brand of drive did you use?
Originally posted by Mark Gulbrandsen: Leo, it will be interesting to see what happens when some of the memory in these SSDs starts to fail... It's a good thing it is mostly read only, but there are also other things constantly being written...
I've only personally experienced one SSD fail, and that was a case of infant mortality. Around this time last year, Monoprice or Woot or one of these sites was doing an offer for no-name 2TB ones for $150 (about half the typical going rate for one at the time). Thinking that the extra space on my laptop would be useful for holding test and demo DCP content (for uploading into servers via FTP in the field), I bought one, cloned the 1TB drive then in the laptop to it and expanded the data partition to fill the rest of the space, then installed it. Worked great for about a week, then the laptop suddenly refused to boot - the BIOS couldn't even see the drive. Thankfully, I hadn't wiped or repurposed the previous drive, and so was able to reinstall it and return the failed 2TB one for a refund.
BTW, there are various tools that will read the SMART parameters of SSDs and give you early warning of potential failure. I like CrystalDiskInfo.
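CrystalDiskInfo is Windows-only; on Linux or macOS the usual equivalent is `smartctl` from smartmontools. As a minimal sketch of scripting the same check, the snippet below shells out to `smartctl -A` and pulls out wear-related attributes. The attribute names matched here are common vendor examples, and the `/dev/sda` device path is an assumption.

```python
import subprocess

# Sketch: read SSD wear indicators via smartmontools' `smartctl`.
# Attribute names vary by vendor; these are common examples.
WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count", "Percent_Lifetime_Remain")

def parse_wear(smart_output: str) -> dict:
    """Pull normalized values for wear attributes out of `smartctl -A` text."""
    wear = {}
    for line in smart_output.splitlines():
        fields = line.split()
        # attribute table rows look like: ID# NAME FLAG VALUE WORST THRESH ...
        if len(fields) >= 6 and fields[1] in WEAR_ATTRS:
            wear[fields[1]] = int(fields[3])  # normalized value, 100 = new
    return wear

def read_wear(device: str = "/dev/sda") -> dict:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    return parse_wear(out)
```

Run periodically (e.g. from cron), a falling normalized value gives the same early warning the GUI tools do.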
The SSD RAID in the DSS200 doesn't seem to have increased ingest speed significantly, if at all - maybe 5-10%. I suspect that with the spinning rust drives as installed at the factory, the server was operating near the transfer speed limit of the SATA interface on the motherboard (to which the CRU reader for ingest is connected), or of the 3Ware RAID card, or both. In contrast, when I put SSDs in the Alchemy mentioned above, the ingest speed (from a CRU reader connected by USB3) shot up from around 50-55 MB/s to 85-90.
Originally posted by Steve Guttag: There are definitely drive adapters that already position the drive and have the appropriate connector for use in a 3.5" blade server slot. They are about $12 each on Amazon.
I agree that the drives will likely outlast the Dolby. I'd also like to know whether these drives would work OK in other servers. Obviously the manufacturers like to sell drives, as it's more profit; that's the only reason I can think of for why they have never recommended SSDs. I actually asked GDC about this at the original training class in Burbank some years back, and they never gave a definitive answer.
Whereas one could always source their own drives, I don't buy the profit motive as the reason for choosing HDD over SSD. The SSD cost/size ratio wasn't there when the box servers were coming out, and SSDs weren't as reliable.
When digital cinema servers started, even HDDs were in the 400-500 GB size range. It was a big deal when they finally started putting in 1TB drives; older servers had to get firmware updates just to handle those HUGE drives. We saw another wave when 2TB drives became commonplace, and again firmware updates were required before the 2TB drives could be used. Each time, SSDs would have been either unavailable or crazy expensive. Now, a 2TB drive is a "legacy" drive.
For the integrated servers like the IMS and ICMP, they still use spinning rust for 1TB drives, but I fail to see the point there unless they just can't find a suitable 1TB SSD; the cost of those is just about identical to HDD. There is still a HUGE jump when you get into a 2TB SSD in a 2.5" package: they are typically over $200/drive for the kind that will do well in a cinema server.
Originally posted by Mark Gulbrandsen: I'd also like to know if these drives would work OK in other servers.
Originally posted by Steve Guttag: Whereas one could always source their own drives, I don't buy the profit motive as the reason for choosing HDD over SSD.
Also agreed that the price difference between SSDs and HDDs in sizes up to 1TB is now low enough that SSDs increasingly make sense for screen servers. Even 2TB ones are coming down: they're around $200 for a desktop one and $300 for a NAS-optimized one now, compared to $300 and $500 a year ago.
Even the "approved" list doesn't mean other drives can't be used. Just use "enterprise" drives and you're pretty much OK. Obviously not in a warranty server, but why would you? Cheap standard drives may even work fine, but I haven't had the nerve to try them. I have found them in servers that did show buffer underruns, but those servers had been poorly maintained overall (surprised?), so I can't blame that on the drives.
I doubt that SSDs give any benefit but if you want to try them, go ahead.
Originally posted by Steve Guttag: I'm curious if the boot-up time is improved on the DSS servers. As for content ingest/play, I can't see them adding anything. As it is, the DSS server can ingest at full speed and playback...something I wish the other servers could do.
The advantage is not primarily speed. There was a slight gain, but not much. As I wrote earlier, the ingestion speed gain was 10-15% at most (though much more significant after putting SSDs in the Alchemy). There was likely a similar gain in boot-up time (on the DSS200) - possibly a minute was saved from power on to the Show Manager UI appearing.
The reason for doing this was reliability and longevity rather than read/write performance. An HDD is worn by the time it spends spinning, which is 24/7 in a server application, whereas an SSD is worn primarily by write cycles. Given the overall size of the content RAID and the volume of DCPs that goes through that particular server, I'd be surprised if each individual memory cell is written to more than once a month. Yes, there is the operating system stuff, but I don't think that's significant. I have a 128GB SSD in an older laptop, installed in 2015, and CrystalDiskInfo still reckons that it has 91% of its useful life remaining. At that rate, it'll last for over half a century.
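The "half a century" figure follows from a simple linear extrapolation: if a fraction of rated life has been consumed over the drive's service so far, projected total life is service time divided by that fraction. The five-year service time below is an illustrative assumption for the laptop, not a figure from the post.

```python
# Linear extrapolation of SSD life from a SMART "life remaining" reading.
# years_in_service is an assumed figure for the example laptop.

def projected_life_years(years_in_service: float, life_remaining: float) -> float:
    consumed = 1.0 - life_remaining  # fraction of rated life used so far
    return years_in_service / consumed

print(round(projected_life_years(5, 0.91), 1))  # ~56 years at this wear rate
```

The extrapolation assumes the write workload stays roughly constant, but even if it doubled, the projected life would still far exceed the hardware's useful lifetime.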