graphic7
Finally, this will be the first hardware RAID controller that I'll own and use in my own workstation. Software RAID gets tiring after a while, especially if you have an ungodly number of small drives in a system (6 × 9GB U160 SCSI drives: reliable and fast, but small).
With Linux, SuSE specifically, creating a software RAID array is all point and click. You have two ways of doing it: via the LVM, or via a classic MD-style setup. The bad thing about the LVM is that you can't use it for your root (/) device. For some reason, on every distribution I play around with (including SuSE, Debian where you build the kernel yourself and compile all the LVM pieces, and Fedora), I can never get it to use an LVM volume as the root device. With MD this all works fine, and I can go about my business. But while MD will create a RAID array for you, it's not one I would depend on, and performance isn't the best either.
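For anyone who'd rather skip the point-and-click installer, an MD stripe can also be built by hand with mdadm. This is just a sketch; the device names and mount point are examples, and the commands need root and real disks:

```shell
# Stripe two SCSI partitions into a single MD device
# (example device names; adjust for your system)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# From here on, the kernel presents /dev/md0 as one block device
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/array

# Record the layout so the array can be assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
```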
Setting up RAID on the BSDs isn't much better. A software RAID array cannot be created during the install; all RAID creation must be done afterwards. This means striping the root device is out, and mirroring the root device isn't the easiest thing to do either, though there are ways. On FreeBSD, RAID creation is done with a tool called vinum (NetBSD and OpenBSD use RAIDframe instead). Vinum is much like MD, although a little more professional-looking, and it emulates the Veritas interface quite well. With vinum it's a bit more tedious to create a complex RAID array: the config file has its own syntax, and it is by no means understandable from just reading the man page. If you have a complex RAID array (100 drives or more), good luck with vinum. Performance and reliability are about the same as with LVM and MD.
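To give you an idea of that config syntax, here is a minimal sketch of a two-disk vinum stripe. The drive names and partitions are made up for illustration; a `length 0` subdisk means "use the rest of the drive":

```shell
# Hypothetical /etc/vinum.conf for a two-disk stripe
cat > /etc/vinum.conf <<'EOF'
drive d1 device /dev/da0s1e
drive d2 device /dev/da1s1e
volume stripe0
  plex org striped 512k
    sd length 0 drive d1
    sd length 0 drive d2
EOF

# Create the objects, then treat the volume like any disk
vinum create /etc/vinum.conf
newfs /dev/vinum/stripe0
```

Now imagine maintaining that file for a hundred subdisks, and you see why I said "good luck."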
With Solaris you have a choice between Solstice DiskSuite and Veritas (which you can also use on Linux and other commercial Unices, if you like). DiskSuite is a step up from both vinum and MD. It's slightly more complex than vinum, because it's software RAID geared towards complex setups. You can mirror the root drive pretty easily (much more easily than with vinum), but striping the root device is out of the question; the same goes for Veritas. I've used DiskSuite and Veritas both for quite some time, and both are reliable and about as fast as software RAID gets.
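The "pretty easy" root mirror under DiskSuite looks roughly like this. The disk and slice names are examples, and it assumes the second disk is partitioned identically to the first:

```shell
# State database replicas must exist before anything else
metadb -a -f c0t0d0s7 c0t1d0s7

metainit -f d10 1 1 c0t0d0s0   # submirror on the existing root slice
metainit d20 1 1 c0t1d0s0      # submirror on the second disk
metainit d0 -m d10             # one-way mirror holding the root submirror
metaroot d0                    # update /etc/vfstab and /etc/system

# Reboot, then attach the second half so it syncs
metattach d0 d20
```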
AIX is an interesting OS. It's not quite UNIX, more its "own" thing, but it's still a UNIX. The reason I say it's interesting is not just all the "weird" utilities it has like smit, but primarily its software RAID support. If I were going to use software RAID (and happened to have a pSeries or RS/6000 under my desk, which I don't), this is what I would use, without a doubt. The AIX LVM is integrated into smit, making almost any LVM operation (migration, creation, deletion, status queries) quick and easy. Creation and deletion are what every RAID or LVM tool does; migration is where it gets interesting. Say we just bought a bunch of new disks for an AIX box and want to put them to use. What better way than actually moving a filesystem that everyone uses onto them? /usr sounds like a good candidate. And what will we do with all those old drives? Let's "absorb" them into the newly created volume. With smit, we just migrate it - it's literally that simple. Browse through a few menus and choose the volumes to be moved and the disks they are to be moved to. Optionally, we could just have expanded the volume, which is by no means something that makes AIX special - Veritas, DiskSuite, and the Linux LVM all do this; I just want to point out how special the AIX LVM is. After the procedure, you might be stunned because 1) you didn't have to reboot, 2) smit didn't ask you to drop into single-user mode, and 3) no users or daemon processes were interrupted. In other words, if I were logged in using vi, or compiling with compilers that live in /usr, I would never notice that the whole volume I'm using was being moved. I could be building a large project that calls 'cc' every so often, and while the logical volume was being migrated I'd never notice.
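Behind those smit menus are plain LVM commands, so the whole live migration can be sketched in three lines. Disk names here are hypothetical; hd2 is the logical volume that conventionally backs /usr on AIX:

```shell
# Absorb the new disks into the volume group
extendvg rootvg hdisk2 hdisk3

# Move the /usr logical volume (hd2) off the old disk, online,
# with no reboot and no single-user mode
migratepv -l hd2 hdisk0 hdisk2

# Once the old disk is empty, retire it from the volume group
reducevg rootvg hdisk0
```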
Unfortunately, I don't have an RS/6000 or pSeries, so that's out of the question; instead, I turned to cheap hardware RAID solutions.
With hardware RAID, every RAID operation is done by the RAID BIOS (at bootup) or through some third-party utility in the OS. The arrays are created, and from that point on the system sees them as actual drives. If I stripe all of my drives and then install Windows or Linux, they'll see the array as a single drive. No more software RAID hassles.
So in about an hour or so I should have a new (refurbished) Dell PERC 3/QC at my door: a four-channel Ultra2 RAID controller with 128MB of cache. It supports RAID 0, 1, 5, 10, and 50; RAID 10 and 50 are nested modes that stripe (RAID 0) across RAID 1 mirrors and RAID 5 arrays, respectively.
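The trade-off between those levels is easiest to see as usable capacity. Here's the rough math for four hypothetical 9GB drives (n disks of s GB each):

```shell
# Usable capacity per RAID level, 4 x 9GB drives (example numbers)
n=4; s=9
echo "RAID 0:    $(( n * s ))GB"       # pure stripe, no redundancy
echo "RAID 1/10: $(( n * s / 2 ))GB"   # half the raw space goes to mirrors
echo "RAID 5:    $(( (n - 1) * s ))GB" # one disk's worth of parity
```

RAID 5 gives back most of the space, but writes pay the parity penalty; RAID 10 costs half the capacity but avoids it.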
What I was thinking of doing was setting up a striped array. I also have a 73GB Ultra2 drive coming in today, which means I'll have to move one of my 9GB drives (the drive cage doesn't have room). Should be interesting.