As disk drives continue to grow in size, protecting critical data becomes more challenging. While disks have gotten larger, their overall reliability has remained about the same. Larger disks mean that reconstructing a failed disk from Redundant Array of Independent Disks (RAID) parity information takes significantly longer, raising the possibility of a second disk failure or other error occurring before reconstruction can complete. The likelihood of bit and block errors also grows in proportion to media size, so an error during reconstruction becomes a distinct possibility, and in single-parity RAID implementations such a double failure can disrupt business and cause data loss.
NetApp pioneered the development of its unique dual-parity RAID implementation, RAID-DP®, to address this resiliency problem. While other dual-parity RAID 6 implementations exist, RAID-DP is the only one that provides protection against double disk failures in the same RAID group with no significant decreases in performance.
An aggregate is made up of RAID groups. You cannot split a RAID group between aggregates, but a single aggregate can contain multiple RAID groups. Always use RAID-DP, NetApp's implementation of RAID 6, which stripes data across the data drives and reserves two disks per group solely for parity: one for row parity and one for diagonal parity. This allows you to lose two disks per RAID group without losing data. As the ratio of data disks to parity disks increases, space efficiency improves, but so does the risk of losing three disks in one RAID group. A high data-to-parity ratio also has performance implications.
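To make that trade-off concrete, here is a small Python sketch (the function name is mine, not a NetApp tool) that computes the fraction of raw disk space left for data in a RAID-DP group of a given size:

```python
def raid_dp_efficiency(group_size: int) -> float:
    """Fraction of raw space usable for data in a RAID-DP group.

    RAID-DP reserves two disks per group for parity (row parity and
    diagonal parity), so usable space is (n - 2) / n.
    """
    if group_size < 3:
        raise ValueError("A RAID-DP group needs at least 3 disks")
    return (group_size - 2) / group_size

# A 16-disk group: 14 data disks, 2 parity disks
print(f"{raid_dp_efficiency(16):.1%}")   # 87.5%
# A smaller 8-disk group spends proportionally more on parity
print(f"{raid_dp_efficiency(8):.1%}")    # 75.0%
```

Doubling the group size roughly halves the parity overhead, which is exactly why larger groups are more space efficient but expose more disks to a triple-failure scenario.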
The first step in creating an aggregate is to add it from the FilerView web GUI:
So then we start the wizard:
The next step is to give the aggregate a name; always check Double Parity. This is NetApp's RAID-DP feature, which protects each RAID group against a double disk failure:
The RAID group size should not exceed 16 disks. An aggregate can contain multiple RAID groups. If I had created an aggregate with 24 disks, Data ONTAP would have created two RAID groups: the first fully populated with 16 disks (14 data disks and two parity disks), and the second with the remaining eight disks (six data disks and two parity disks). This is a perfectly normal situation.
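The splitting rule described above can be sketched in a few lines of Python (illustrative only; Data ONTAP's actual placement logic has more options, such as a tunable RAID group size):

```python
def raid_group_layout(total_disks: int, max_group_size: int = 16,
                      parity_per_group: int = 2):
    """Split disks into RAID groups, filling each up to max_group_size.

    Mirrors the behavior described above: full groups first, with the
    remainder going into the last group. Returns a list of
    (data_disks, parity_disks) tuples.
    """
    groups = []
    remaining = total_disks
    while remaining > 0:
        size = min(remaining, max_group_size)
        groups.append((size - parity_per_group, parity_per_group))
        remaining -= size
    return groups

print(raid_group_layout(24))  # [(14, 2), (6, 2)]
print(raid_group_layout(16))  # [(14, 2)]
```

For the 24-disk example this reproduces the 16 + 8 split: one group with 14 data disks and one with 6, each carrying its own two parity disks.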
I have skipped two steps, disk selection and disk type, which should be left at their defaults. The next thing to choose is the disk size; I chose 1020MB, as the filer recommended.
This aggregate is for testing purposes; in a production environment you should create it as big as your projects need. For starters I have added only three drives. More disks can be added dynamically later, although disks cannot be removed from an aggregate without destroying it.
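If you prefer the console to FilerView, the same create-and-grow workflow looks roughly like this in 7-Mode syntax (aggregate name and disk counts match the example above; check `aggr help` on your release before relying on it):

```
filer> aggr create aggr1 -t raid_dp -r 16 3   # 3 disks, RAID-DP, raidsize 16
filer> aggr add aggr1 2                       # later, grow it by 2 more disks
filer> aggr status -r aggr1                   # show the RAID group layout
```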
The final step is to commit the creation of the aggregate. If you do not get any errors here, everything is fine.
And we have a second aggregate, aggr1, created and ready for use:
To be continued. I hope you got a glimpse of the NetApp architecture. Upcoming posts will cover the logical organization of NetApp: FlexVols, NFS and CIFS shares, and qtrees.