I would not risk 36 TB of data on a single drive, let alone a Seagate. Never had a good experience with them.
The only thing I want is reasonably cheap 3.5" SSDs. SATA is fine, just let me pay $500 for a 12TB SSD please.
Yeah, NVMe drives show how little space the storage takes up. Just stick a bunch of them inside the 3.5" format, along with a controller and cooling, and that would be great for a large/slow (relative to NVMe) drive capped by SATA speeds.
I don’t miss the noise hard drives make, plus it’s nice to not really worry as much about what kind of magnetic activity might be going on around it, like is my subwoofer too close or what if my kid somehow gets her hands on a powerful magnet and wants to see if it will stick to my PC case.
Heat.
Didn’t read your full comment, sorry. How would heat control work? Integrated fan?
Passive cooling could be enough. Even a bunch of SSD chips wouldn’t take up all of the vertical space, so the top of the case could just be a heat sink. Though it might need instructions to only install it in an enclosure that has a fan blowing air past it (and not use the spots behind the mobo that don’t get much airflow).
A lot of motherboards come with metal styling that acts as a heat sink for NVMe drives without even using fins, though they still have more surface area than a 3.5" drive and only have to deal with the heat from one or two chips.
But maybe it isn’t realistic and that’s why we don’t see SSDs like that on the market (in addition to price).
They seem to be very hit and miss: some models have very low failure rates, while others have very high ones.
That said, the 36 TB drive is most definitely not meant to be used as a single drive without any redundancy. I have no idea what the big players like Backblaze, for example, are doing, but I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me. Still, I’d likely be going with smaller drives, because however much a 36 TB drive costs, I don’t wanna feel like I’m spending 2x the cost of one of those just for redundancy lmao
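Quick back-of-the-envelope on that tradeoff (a minimal sketch, not tied to any particular RAID implementation): RAID 6 always burns two drives' worth of capacity on parity, so smaller drives in a bigger array give up a smaller fraction of the raw storage you paid for.

```python
# RAID 6 usable capacity: two drives' worth of parity, whatever the
# drive size. With fewer, bigger drives, a larger fraction of the raw
# capacity goes to redundancy.
def raid6_usable(drive_count: int, drive_tb: float) -> float:
    return (drive_count - 2) * drive_tb

print(raid6_usable(4, 36))   # 72 TB usable out of 144 TB raw (50% to parity)
print(raid6_usable(8, 18))   # 108 TB usable out of 144 TB raw (25% to parity)
```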
Could you imagine the time it would take to resilver one drive… Crazy.
Repeat after me: RAID is not a backup solution, RAID is a high-availability solution.
The point of RAID is not to safeguard your data; you need proper backups for that (the 3-2-1 rule of backups: 3 copies of the data on 2 different storage media, with 1 copy off-site). RAID will not protect your data from user error, malware, OS bugs, or anything like that.
The point of RAID is so everyone can keep working if there is a hardware failure. It’s there to prevent downtime.
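To make the 3-2-1 rule concrete, here's a minimal sketch (hypothetical names and layout, not from any particular backup tool) that checks whether a set of copies satisfies it:

```python
# Hypothetical sketch: check a backup plan against the 3-2-1 rule.
from dataclasses import dataclass

@dataclass
class Copy:
    location: str   # e.g. "desktop", "home NAS", "cloud"
    medium: str     # e.g. "HDD", "SSD", "tape", "cloud object storage"
    offsite: bool

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    enough_copies = len(copies) >= 3                      # 3 copies of the data
    enough_media = len({c.medium for c in copies}) >= 2   # on 2 different media
    has_offsite = any(c.offsite for c in copies)          # 1 copy off-site
    return enough_copies and enough_media and has_offsite

plan = [
    Copy("desktop", "SSD", offsite=False),            # the live data
    Copy("home NAS (RAID 6)", "HDD", offsite=False),  # the whole array is ONE copy
    Copy("cloud provider", "cloud object storage", offsite=True),
]
print(satisfies_3_2_1(plan))  # True
```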
It’s 36 TB drives. Most people aren’t planning on keeping anything legal or self-produced on there. It’s going to be pirated media, and idk about you, but I’m not uploading that to any cloud provider lmao
These are enterprise drives, they aren’t going to contain anything pirated. They are probably going to one of those cloud providers you don’t want to upload your data to.
I can easily buy enterprise drives for home use. What are you on about?
You couldn’t afford this drive unless you are enterprise so there’s nothing to worry about. They don’t sell them by the 1. You have to buy enough for a rack at once.
Ignoring the Seagate part, which makes sense… is there an issue with 36 TB specifically?
I recall IT people losing their minds when we hit the 1 TB mark, back when the average hard drive was like 80 GB.
So this growth seems right.
It’s RAID rebuild times.
The bigger the drive, the longer the time.
The longer the time, the more likely the rebuild will fail.
That said, modern RAID is much more robust against this kind of fault, but still: if you have one parity drive and one dead drive, and you lose another drive mid-rebuild, you’re fucked.
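Rough math on why bigger drives make rebuilds scarier (this assumes the commonly quoted spec of one unrecoverable read error per 10^15 bits read; real drives, and arrays with scrubbing and checksums, will behave differently):

```python
import math

# Back-of-the-envelope: chance of hitting at least one unrecoverable
# read error (URE) while rebuilding a single-parity array of 36 TB
# drives. Assumes the common spec of 1 URE per 1e15 bits read.
URE_RATE = 1e-15          # errors per bit read (assumed spec)
DRIVE_BITS = 36e12 * 8    # one 36 TB drive, in bits

def p_rebuild_hits_ure(surviving_drives: int) -> float:
    # A rebuild reads every surviving drive in full.
    bits_read = surviving_drives * DRIVE_BITS
    return 1 - math.exp(-URE_RATE * bits_read)  # Poisson approximation

print(f"{p_rebuild_hits_ure(5):.0%}")  # ~76% for a 6-drive single-parity rebuild
```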
Just rebuilt onto Ceph and it’s a game changer. Drive fails? Who cares, replace it with a bigger drive and go about your day. If the total drive count is large enough, and depending on whether you’re using EC (erasure coding) or replication, recovery can mean pulling data from tons of drives instead of a handful.
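For a rough sense of the EC-vs-replication tradeoff mentioned above (the parameters here are illustrative, not a recommendation for any particular Ceph pool):

```python
# Rough storage-overhead comparison for the two redundancy schemes.
# k/m are generic erasure-coding parameters: k data chunks, m parity
# chunks; values below are illustrative only.
def replication_overhead(replicas: int) -> float:
    return replicas              # raw bytes stored per logical byte

def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k           # raw bytes stored per logical byte

print(replication_overhead(3))  # 3.0x raw storage, survives 2 lost copies
print(ec_overhead(8, 3))        # 1.375x raw storage, survives 3 lost chunks
```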
It’s still the same issue, RAID or Ceph. If a physical drive can only write 100 MB/s, a 36 TB drive will take 360,000 seconds (6,000 minutes, or 100 hours) to write. During that 100-hour window you’ll be down a drive and vulnerable to a second failure. Both RAID and Ceph can be configured for more redundancy at the cost of less storage capacity, but even Ceph fails (dropping to read-only mode, or losing data) if too many physical drives fail.
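The arithmetic above as a tiny sketch, taking the comment's assumed 100 MB/s sustained write as given (real rebuild speed also depends on array load, interface, and how full the array is):

```python
# Sequential rewrite time for one full drive at an assumed sustained
# 100 MB/s, i.e. the minimum window during which the array is degraded.
DRIVE_TB = 36
WRITE_MB_S = 100

seconds = DRIVE_TB * 1e6 / WRITE_MB_S  # 36 TB = 36,000,000 MB
print(seconds)                          # 360000.0 seconds
print(seconds / 3600)                   # 100.0 hours exposed to a 2nd failure
```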