So, it seems the process of fully adapting portable disks is underway, but the difficulties just keep piling up along the way x)
Barco ICMP - hard drive upgrade
-
Originally posted by Carsten Kurz:
For most drives, 'Read Retry' can be configured using manufacturer tools. They only come into play anyway after the drive starts to degrade. That's why consumer drives usually work nicely (even mixed drive types) when new.
/sys/block/<deviceName>/device/timeout
The presence of this setting depends a bit on your kernel version, but if you reduce it to 0, your machine will not wait for bad sectors to be re-read and will gladly continue. This helps if you're trying to recover a bad drive using e.g. "dd" and don't want to wait forever for the process to complete.
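As a sketch of how that knob behaves (this uses a throwaway mock of the sysfs tree so nothing real is touched; on a live system the path is the real /sys/block/&lt;deviceName&gt;/device/timeout, the device name is yours, and writing it requires root):

```shell
# Mock the sysfs layout so this is safe to run anywhere; on real hardware,
# use SYSROOT=/sys and your actual device name, and run as root.
SYSROOT=$(mktemp -d)   # stand-in for /sys
DEV=sda                # hypothetical device name
mkdir -p "$SYSROOT/block/$DEV/device"
echo 30 > "$SYSROOT/block/$DEV/device/timeout"  # kernel default is typically 30 s

cat "$SYSROOT/block/$DEV/device/timeout"        # current command timeout, in seconds
echo 0 > "$SYSROOT/block/$DEV/device/timeout"   # fail fast instead of waiting on retries
```

Note the setting is per-device and resets on reboot, so recovery scripts usually set it just before running dd.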
Originally posted by Steve Guttag:
Oddly, WD was my go-to company for consumer stuff. To date, WD "Black" drives have never failed me, either in 2.5" or 3.5". WD Blue, that's another story. The WD Blue 2.5" drives have been a bit of a disappointment, particularly in some sizes.
Since WD bought out and took over Hitachi, they went from my most reliable to "varies from batch to batch." Seagate, which has historically ALWAYS let me down, seems to be doing better with their Enterprise drives. I see that Barco has gone to Seagate for their 2TB and 4TB drives.
I've noted that Dolby, on their approved drives, have gone to Toshiba and less so to WD/Hitachi.
Me, personally, for SSD 2.5" laptop drives, I've gone to Samsung as nobody seems to have bad things to say about them. I have yet to have one give me grief and recently switched a desktop drive out for a Samsung SSD with nothing but improvement.
If you look at those Backblaze numbers over time, you see that reliability seems to be tied to the following factors. Only the first one may be a bit surprising; the others are rather obvious:
- Certain models/batches seem to be more reliable than others. This doesn't seem to be tied to a particular type of drive; it's more like the manufacturer got lucky by producing a bunch of disks without inherent flaws. Sometimes a manufacturer produces a batch with really bad numbers, which is probably due to a manufacturing or design flaw.
- Numbers across manufacturers over time vary. While some manufacturer may be "good" now, that doesn't mean they were "good" in the past or will be "good" in the future.
- The disks with the biggest storage capacity often seem to be among the least reliable: New models seem to be inherently unreliable.
- Old disks also tend to get unreliable.
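For what it's worth, the headline metric Backblaze publishes is an annualized failure rate computed from accumulated drive-days. A minimal sketch of that arithmetic, with invented sample numbers:

```shell
# Backblaze-style annualized failure rate (AFR): failures per drive-year.
# The figures below are invented, purely for illustration:
# 12 failures across 1000 drives running for one full year.
FAILURES=12
DRIVE_DAYS=$((1000 * 365))
awk -v f="$FAILURES" -v d="$DRIVE_DAYS" \
    'BEGIN { printf "AFR: %.2f%%\n", 100 * f / (d / 365) }'   # prints "AFR: 1.20%"
```

Normalizing by drive-days rather than drive count is what lets them compare models that were deployed at different times and in different quantities.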
-
Well, yes and no. When it comes to this industry, back when 1TB drives were in vogue, the Hitachi drives had more than a "lucky" record. They were so dominant that not only did I gravitate to them, I saw most every server manufacturer gravitate to them as well. They transcended batches, too. Failures were exceedingly rare. In fact, we're starting to pull them after 7+ years of reliable service as they are finally just wearing out from mostly running 24/7.
Seagate has seemed to have just enough bad drives to make one frustrated, and there was that era, as you mentioned, where again it wasn't just (bad) luck: they were dropping like flies. That said, I have seen Seagate drives go for seemingly forever.
The sample size for me on personal computers is certainly small. However, I've still noted that EVERY drive that didn't go the distance was a Seagate, and every WD I put in to replace one lasted the rest of that computer's life. A small sample, but certainly a trend.
For cinema servers, the sample size is significantly higher, since we buy drives by the case and have about 1,000 drives spinning at any one time. Not huge in the computing/server-farm world, but again, enough to show trends in longevity. If the results were truly random, I'd agree that the sample size is too small, but I've found you can spot trends that span batches. Here is one: if you have a Dolby DSS200 server that has worked fine for years (2 or more) but all of a sudden refuses to boot anymore, I'd almost wager you have WD 2TB drives in there (I believe WD2003). The SMART data will show them as fine, without errors or bad sectors. If you rapid-cycle the server (just a fast power off/on), it will boot up. I've only EVER seen that with WD drives. Get rid of them and you get rid of the problem. A small sample size, but still enough to tie the problem conclusively to the WD drives (it also happens across the various versions of the RAID card used by the DSS200, or the DSL200 for that matter).
As for the current WD/HGST 1TB drives, I've had more DOA units lately than ever before.
When a manufacturer makes something, they have a lot of choices in how they do it. There is their software/firmware/hardware for controlling the drives, but also how the drives are made, which parts they use, and just how much that drive is going to cost THEM. If they choose to save a penny or so on a bearing, head, motor, etc., all of that can come back to produce a drive that does not last as long as others, or as long as it should.
-
Sure, you can see some trends in those numbers, but the picture is wildly incomplete. So incomplete that it's impossible for me to make any real recommendation on which hard-drive manufacturer to choose, at least at this point in time. Even the Backblaze numbers are incomplete: they don't test all drives, only particular models that fit their usage scenario. Still, it's one of the best sources of reliability information on those essential pieces of data storage we all rely on...
I agree: we also had very good experiences with the Hitachi/HGST drives when they were still Hitachi; I can't remember even a single failed drive. Now that HGST belongs to WD, I suspect most of the newer models are just re-branded WD drives and perform accordingly. I've never had any really bad luck with WD, though; their drives usually seem to stay within the margin of expected failure rates.
I guess the issue with those WD drives comes down to a seized motor. I've had quite a few drives suffer this fate, among them both Seagates and WD drives. Apparently it's often a problem with the wrong kind of lubricant being used. It's counter-intuitive, but you can sometimes revive them temporarily by knocking on them, at least when you can reach them while powered up... Usually, those should eventually show up under "Mechanical Start Failures" in your SMART data, though.
Last edited by Marcel Birgelen; 04-20-2021, 09:36 AM.
-
Did anyone have any luck installing 2TB WD Red SSDs (SA500) into an ICMP (non-X)? I am aware they're not "validated by Barco," but I am willing to try, considering the price (and some success stories with them in the IMS2000).
My only doubt is that the Barco list mentions SSDs only for the ICMP-X and for non-X units with the GEN2 storage controller and software RAID. Should I expect them not to work on the older hardware revision?
Thanks!
-
Among other things, one feature SSDs rely on that might not be supported by the GEN1 storage controller is TRIM.
If that is the only significant difference, the drives will work all right, but given that they are meant to write large amounts of data often, they will wear faster without it.
I guess the same goes for IMS2000.
"Success" is a rather subjective term. Where people see it, other miss it. It all depends to what the goal is.
-
Three disks writing an estimated 300 GB average per week (2 films); say cut that in half since it's RAID, so 150 GB per week, which is not a lot for an SSD. An operating system like Windows can write quite a lot, especially if you're gaming on it or it's hitting swap.
I would consider that minimal-to-medium use, and you should expect a long, reliable lifespan.
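To put that in perspective, here's a back-of-the-envelope endurance estimate. The 150 GB/week comes from the estimate above; the 500 TBW (Terabytes Written) rating is a hypothetical placeholder, not the spec of any particular drive, so check your drive's datasheet:

```shell
# Back-of-envelope SSD endurance: years until a TBW budget is exhausted
# at a steady write rate. 500 TBW is a hypothetical rating for illustration.
TBW_TB=500
GB_PER_WEEK=150
awk -v tbw="$TBW_TB" -v w="$GB_PER_WEEK" \
    'BEGIN { printf "~%.0f years at that rate\n", (tbw * 1000 / w) / 52 }'
```

At 150 GB/week the budget lasts for decades, which is why steady playout storage is gentle duty for an SSD compared to an OS or swap volume.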
TRIM-wise, Google says:
---
Is trim necessary for SSD?
No matter what name it goes by, Trim works with Active Garbage Collection to clean up and organize your solid state drive. Trim is beneficial, but not mandatory. Because some operating systems do not support Trim, SSD manufacturers design, create, and test their drives assuming that Trim will not be used.
---
This matters even less when larger files predominate. For example, OS drives write millions of small files and updates, while a data disk (containing CPL essences) deals with huge files and isn't affected by small-file churn.
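If you want to check whether a given Linux setup actually exposes TRIM end to end (drive plus controller plus kernel), util-linux can show the discard geometry per block device:

```shell
# List discard (TRIM) capabilities of all block devices.
# Non-zero DISC-GRAN / DISC-MAX columns mean the device advertises TRIM
# support through the whole stack; all-zero means TRIM requests go nowhere.
lsblk --discard
```

A controller that doesn't pass discard through (as an older GEN1 setup might not) would show zeros here even for an SSD that supports TRIM natively.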
-
I think Leo confirmed that he successfully used non-approved drives in first-generation, out-of-warranty ICMPs without issues. It seems the first-generation RAID controller is more flexible than Barco suggests; in particular, it does not appear to exclude non-approved drives by checking device IDs or the like.
All at your own risk, of course. SSD prices have come down so far lately that I may soon try 3×2TB SSDs in our ICMP as well.