Sun StorageTek StorEdge 6140 FC/SATA array
The next victim is a Sun StorEdge 6140 array with 6xFC and 10xSATA disks.
- Sun X4200 M2 dual-Opteron server with 20 GB RAM, 10 GB used for the ARC cache
- 2 Sun (QLogic) FC cards installed, 4 Gbit/sec capable, connected through a SAN switch fabric to the RAID device
- SAN switch fabric without other traffic (isolated)
- Solaris 10 x86 with kernel patch 127112-11 installed (all relevant ZFS patches as of 2008/05/12; no kernel updates during the tests, so results remain comparable)
- Sun SE6140 array with 10x 750GB SATA and 6x 140 GB FC
- Blocksize used in the disks arrays: 128KB
- ZFS recordsize: 128KB
This was a real-world test, so the caches on the system (ARC) and on the RAID device were ON. I explicitly did not want a lab benchmark with all caches turned off.
Test configurations
The following configurations were tested:
- 1x3r5: RAID5 with 3 disks
- 2x3r5: 2 RAID5 sets with 3 disks each, striped via ZFS
- 1x6r5: RAID5 with 6 disks
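A quick sketch of what these layouts mean for usable capacity (my own arithmetic, using the 750 GB SATA disks listed above as an example; the same formula applies to the 140 GB FC disks):

```python
def raid5_usable(disks, disk_gb):
    """RAID5 stores one disk's worth of parity per set,
    so usable capacity is (n - 1) * disk size."""
    return (disks - 1) * disk_gb

# The three tested configurations, with 750 GB disks.
# 2x3r5 is two 3-disk RAID5 sets striped via ZFS, so the
# parity overhead is paid twice.
print(raid5_usable(3, 750))      # 1x3r5: 1500 GB usable
print(2 * raid5_usable(3, 750))  # 2x3r5: 3000 GB usable
print(raid5_usable(6, 750))      # 1x6r5: 3750 GB usable
```

Note the trade-off: 2x3r5 sacrifices a third of the raw capacity to parity, while 1x6r5 sacrifices only a sixth.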
In every graph, the results from my first candidate, the Infortrend OXYGENRAID device, are shown for comparable configurations.
All devices were accessed via Sun's scsi_vhci (MPxIO) driver. The driver automatically recognizes this array as asymmetric, so only one path is used.
Sun's filebench tool was used to generate the results.
The following filebench personalities and parameters were used:
All tests were run for 300 seconds ("run 300").
ZFS was used as filesystem in all scenarios.
1. multistreamread
In this test, pure bulk sequential data transfer is measured. As expected, the FC disks are much faster than their SATA counterparts. This is no big surprise, as the SATA disks spin at 7,500 rpm while the FC disks run at 15,000 rpm.
The 2x3r5 configuration with FC disks performed so well that the 4 Gbit/sec Fibre Channel link was saturated, delivering 385 megabytes/sec.
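As a sanity check on the saturation claim (my own back-of-the-envelope figures, not measurements from this test): a 4 Gbit/sec FC link actually signals at 4.25 Gbaud with 8b/10b encoding, which caps the payload at roughly 425 MB/s before frame overhead:

```python
# 4 Gbit/s Fibre Channel: 4.25 Gbaud line rate, 8b/10b encoding
# (8 payload bits per 10 line bits). Generic FC figures, not from the test.
line_rate_baud = 4.25e9
payload_mb_per_s = line_rate_baud * 8 / 10 / 8 / 1e6  # ~425 MB/s

measured_mb_per_s = 385
utilization = measured_mb_per_s / payload_mb_per_s
print(round(payload_mb_per_s), "MB/s ceiling,",
      round(utilization * 100), "% reached")
```

About 91% of the raw payload ceiling; the remainder goes to FC frame headers and protocol overhead, so 385 MB/s is indeed a saturated link.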
The rather cheap Infortrend device performs very poorly compared to the Sun product.
As you would expect, the latency is reciprocal to the bandwidth:
The OXYGENRAID device takes up to three times longer to complete a read operation!
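The reciprocity is easy to see: for fixed-size sequential requests, per-operation latency is roughly request size divided by throughput, so a device delivering a third of the bandwidth needs three times as long per operation. A small sketch with illustrative numbers (not measured values from the test):

```python
# latency ~ request_size / throughput for streaming I/O
request_mb = 128 / 1024  # one 128 KB request, in MB

for mb_per_s in (385, 385 / 3):
    latency_ms = request_mb / mb_per_s * 1000
    print(f"{mb_per_s:.0f} MB/s -> {latency_ms:.2f} ms per 128 KB read")
```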
2. multistreamwrite
As above, this is a sequential bulk transfer test. Writes take more time than reads; that is the expected behaviour on RAID sets, because more data has to be computed and written to the disks:
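The extra work on RAID5 is the parity update: a small (sub-stripe) write forces the controller to read the old data and old parity, XOR in the change, and write both back, i.e. two reads plus two writes per logical write. A minimal sketch of the parity arithmetic:

```python
# RAID5 parity is the XOR of the data blocks in a stripe.
# Toy one-byte "blocks" stand in for real stripe units.
d0, d1, d2 = 0b1010, 0b0110, 0b1111
parity = d0 ^ d1 ^ d2

# Small write to d1: new_parity = old_parity XOR old_data XOR new_data
new_d1 = 0b0001
new_parity = parity ^ d1 ^ new_d1

# The incremental update matches recomputing parity from scratch,
# and any lost block can be rebuilt by XOR-ing the survivors.
assert new_parity == d0 ^ new_d1 ^ d2
assert d0 == new_parity ^ new_d1 ^ d2
print(bin(new_parity))
```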
3. varmail
The varmail scenario is a test with many small files in one directory and many create and delete operations.
The results here are fuzzy. In this scenario, using the expensive, fast FC disks does not yield big performance gains, and the large performance loss in the 6-disk FC configuration is striking. Perhaps a bug in the RAID controller firmware? All tests were run three times and the mean was taken as the result; in this case, all three runs were inferior.
And: Even the rather cheap Infortrend device can compete in this field.
4. oltp
The last test is the oltp scenario, which simulates a transaction-based database access pattern using a log file and big data files. Updates are done in small chunks with extensive locking.
The FC disks show a clear performance advantage over the SATA disks and the Infortrend devices. FC disks are made for this kind of OLTP data access.
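One mechanical reason for the FC advantage (generic drive physics, not a measurement from this test): random OLTP I/O is dominated by seek and rotational delay, and average rotational latency alone is half a revolution:

```python
# Average rotational latency = time for half a revolution:
# 60 / rpm / 2 seconds. Generic drive mechanics, not test data.
def avg_rotational_latency_ms(rpm):
    return 60.0 / rpm / 2 * 1000

print(avg_rotational_latency_ms(15000))  # FC disks:   2.0 ms
print(avg_rotational_latency_ms(7500))   # SATA disks: 4.0 ms
```

Every small random update thus pays roughly twice the rotational delay on the SATA spindles, before seek times are even counted.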