Got a call at 4am - RAID Gurus Please Read
javier at advancedmachines.us
Fri Dec 12 06:07:19 UTC 2014
Hey guys, I am running it on FreeBSD (NAS4Free).
It's my understanding that when a resilver happens in a zpool, only the
data that has actually been written to the disks gets read and rebuilt,
unlike traditional RAID5, which rebuilds the whole array, reading even
empty blocks. I know I should be using RAIDZ2 for an array this size, but
I have daily backups off of this array, and this is a lab, not a
production environment. In a production environment I would use RAIDZ2 or
RAIDZ3. The bottom line is that even just RAIDZ1 is way better than any
hardware or software RAID5 solution I have come across. Apparently a
single disk with ZFS can survive 1/8 of the disk being destroyed. ZFS
itself has many protections against data corruption. Also, I have
scheduled a zpool scrub to run twice a week (to catch bitrot early,
before it becomes unrecoverable.)
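For reference, a twice-weekly scrub like the one described can be scheduled with a plain cron entry on FreeBSD. This is a sketch, not my exact setup: the pool name `tank` and the schedule are placeholders, and NAS4Free also exposes scrub scheduling in its web GUI.

```shell
# /etc/crontab sketch: scrub the pool at 03:00 every Monday and Thursday.
# "tank" is a placeholder pool name; list real pools with: zpool list
0  3  *  *  1,4  root  /sbin/zpool scrub tank
```

Afterwards, `zpool status -v tank` shows scrub progress and any checksum errors that were found and repaired.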
Anyway, I have been using Linux RAID since it has been available, and I
ask myself: why haven't I used ZFS seriously before now?
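For anyone making the same switch from mdadm, here is a minimal sketch of creating a RAIDZ1 pool. The pool name `tank` and the device names `da0`-`da5` are assumptions for illustration; check your actual disks first.

```shell
# Sketch: build a single six-disk RAIDZ1 vdev (one-disk fault tolerance).
# Device names da0-da5 are assumptions; verify with: camcontrol devlist
zpool create tank raidz1 da0 da1 da2 da3 da4 da5

# Optional: enable lz4 compression, then verify the layout.
zfs set compression=lz4 tank
zpool status tank
```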
On Thu, Dec 11, 2014 at 11:06 AM, Bacon Zombie <baconzombie at gmail.com> wrote:
> Are you running ZFS and RAIDZ on Linux or BSD?
> On 10 Dec 2014 23:21, "Javier J" <javier at advancedmachines.us> wrote:
>> I'm just going to chime in here since I recently had to deal with bit-rot
>> affecting a 6TB Linux RAID5 setup using mdadm (6x 1TB disks).
>> We couldn't rebuild because of 5 URE sectors on one of the other disks in
>> the array after a power/UPS issue rebooted our storage box.
>> We are now using ZFS RAIDZ and the question I ask myself is, why wasn't I
>> using ZFS years ago?
>> +1 for ZFS and RAIDZ
>> On Wed, Dec 10, 2014 at 8:40 AM, Rob Seastrom <rs at seastrom.com> wrote:
>> > The subject is drifting a bit but I'm going with the flow here:
>> > Seth Mos <seth.mos at dds.nl> writes:
>> > > Raid10 is the only valid raid format these days. With the disks as big
>> > > as they get these days it's possible for silent corruption.
>> > How do you detect it? A man with two watches is never sure what time it
>> > is.
>> > Unless you have a filesystem that detects and corrects silent
>> > corruption, you're still hosed, you just don't know it yet. RAID10
>> > between the disks in and of itself doesn't help.
>> > > And with 4TB+ disks that is a real thing. RAID6 is ok, if you accept
>> > > rebuilds that take a week, literally. Although the rebuild rate on our
>> > > 11-disk RAID6 SSD array (2TB) is less than a day.
>> > I did a rebuild on a RAIDZ2 vdev recently (made out of 4TB WD Reds).
>> > It took nowhere near a day let alone a week. Theoretically takes 8-11
>> > hours if the vdev is completely full, proportionately less if it's
>> > not, and I was at about 2/3 in use.
>> > -r