LSI ProFibre 4000R: anno 2002.
I just got an old LSI ProFibre Fibre Channel RAID device, ready for the trash. I was curious whether this old device could still compete with a SATA RAID device from 2007.
If you look at the numbers, please bear in mind that this device is 6 years old. Many improvements in hard disks and controllers have been made since then.
As with the SATA device, the filesystem used is ZFS. So let's begin.
- Sun X4200 M2 dual-Opteron server with 20 GB RAM, 10 GB used for the ARC cache
- 2 Sun (QLogic) FC cards installed, 4 Gbit/s capable, connected through a SAN switch fabric to the RAID device
- SAN switch fabric without other traffic (isolated)
- Solaris 10 x86 with kernel patch 127112-11 installed (all relevant ZFS patches as of 2008/05/12)
- LSI ProFibre 4000R with 10x 146 GB Fibre Channel disks (labeled "Seagate")
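On Solaris 10 the ARC is usually capped via /etc/system; a minimal sketch of how a 10 GB limit like the one above is typically set (the tunable name is standard, the value is simply 10 GB in bytes):

```shell
# /etc/system fragment -- cap the ZFS ARC at 10 GB (takes effect after reboot)
# 10 GB = 10 * 1024 * 1024 * 1024 bytes
set zfs:zfs_arc_max = 10737418240
```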
This was a real-world test, so the caches on the system (ARC) and on the RAID device were ON. I explicitly did not want a lab benchmark with all caches turned off.
Test configurations
The following configurations were tested:
- 1x1: single disk, configured as JBOD/non-RAID, for reference
- 1x2r1: RAID1 (mirror) with 2 disks
- 1x2m: ZFS mirror with 2 disks as JBOD/NRAID
- 1x3r5: RAID5 with 3 disks
- 2x3r5: 2 RAID5 sets with 3 disks each, striped via ZFS
- 3x3r5: 3 RAID5 sets with 3 disks each, striped via ZFS
- 1x6r5: RAID5 with 6 disks
- 1x6z1: ZFS raidz1 with 6 disks (JBOD/NRAID)
- 2x6r5: 2 RAID5 sets with 6 disks each, striped via ZFS
- 1x10r5: RAID5 with 10 disks
- 1x10z1: ZFS raidz1 with 10 disks (JBOD/NRAID)
- 5x2r1: 5 mirrors striped by ZFS
- 5x2m: 5 striped ZFS mirrors
All devices were accessed via Sun's scsi_vhci (MPxIO) driver, with a logical blocksize of 2^20 bytes (1 MB).
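A sketch of how a few of these layouts translate into zpool commands; the pool names and c#t#d# device paths are placeholders, not the ones actually used on this system:

```shell
# 1x2m: ZFS mirror over two JBOD/NRAID disks
zpool create tank1x2m mirror c5t0d0 c5t1d0

# 3x3r5: three hardware RAID5 LUNs (3 disks each), striped by ZFS
zpool create tank3x3r5 c5t10d0 c5t11d0 c5t12d0

# 1x10z1: raidz1 over ten JBOD disks
zpool create tank1x10z1 raidz1 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 \
    c5t5d0 c5t6d0 c5t7d0 c5t8d0 c5t9d0

# 5x2m: five ZFS mirrors, striped by ZFS
zpool create tank5x2m mirror c5t0d0 c5t1d0 mirror c5t2d0 c5t3d0 \
    mirror c5t4d0 c5t5d0 mirror c5t6d0 c5t7d0 mirror c5t8d0 c5t9d0
```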
Sun's filebench tool was used to generate the results.
The following filebench personalities and parameters were used:
All tests were run for 300 seconds ("run 300").
ZFS was used as filesystem in all scenarios.
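A varmail run in filebench's interactive shell looks roughly like this; the target directory is a placeholder for the mount point of the zpool under test:

```shell
# interactive filebench session sketch (Solaris 10)
filebench> load varmail          # load the varmail personality
filebench> set $dir=/tank/fb     # placeholder: directory on the zpool under test
filebench> run 300               # run for 300 seconds, as in all tests here
```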
Impressive fact: in terms of I/O you can get more bandwidth out of that old LSI FC RAID than out of the newer SATA devices (which topped out at approx. 95 MB/sec).
Write speeds differ from the read patterns: writes on zpools with many components get slower. Astonishing was the fact that ZFS mirrors compare very badly when five of them are striped together (5x2m).
The varmail scenario is a heavy random I/O scenario with many files in one directory and concurrent access by many threads. If you want to use a RAID system for more than single-user video streaming, read on.
The graphic above shows the total number of operations per second measured. If you would rather see read and write operations separately, this is the graph for you:
Main results (the same as with the Infortrend SATA device):
- ZFS' mirror does not enhance performance.
- More spindles mean more operations per time unit.
- ZFS' RAID implementation in conjunction with the controller's JBOD/NRAID setting is not recommendable for this scenario.
- ZFS concatenation does increase performance - 3x3r5 performs better than 1x6r5.
Now for a more complex scenario. "oltp" simulates a transactional database (like Oracle, PostgreSQL, ...) with (very) small database updates, a common shared-memory-mapped region, and a transaction log file. 230 threads run in parallel.
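The oltp runs were started the same way, just with the oltp personality loaded; a sketch (the directory is again a placeholder, and the 230 threads come from the personality's own worker defaults):

```shell
# filebench oltp session sketch
filebench> load oltp             # transactional-database personality
filebench> set $dir=/tank/fb     # placeholder: directory on the zpool under test
filebench> run 300               # 300-second run, as in all tests here
```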
- raidz1 (ZFS raid) does not scale at all with the number of spindles.
- ZFS' concatenation/stripe algorithm performs very well with this kind of workload.
- These Seagate SATA disks seemed able to handle about 100 oltp ops per second; FC and SAS disks should handle more than that.
Conclusion - Comparison old FC - new SATA
A foreword: only comparable configurations have been compared (same disk configuration) - except in the maximum graph, where I plotted the best value of all tested configurations. The 5x2 mirror configurations of the LSI RAID were not taken into account, as I did not test the same configuration on the Infortrend side. But more than 130 MB/sec is really good.
With regard to I/O bandwidth, the LSI device is faster, on both reads and writes:
The varmail benchmark favors the new Infortrend SATA device:
And with oltp, the old RAID controller is not really optimized for that kind of workload:
The result is somewhat amazing; keep in mind that the LSI device is 6 years old. So six-year-old Fibre Channel disks have nearly the same performance as new SATA ones. The differences in the oltp workload are, IMHO, down to the optimizations in the integrated RAID controllers.