Sun StorageTek StorEdge 6140 FC/SATA array

The next victim is a Sun StorEdge 6140 array with 6xFC and 10xSATA disks.

Test equipment

  • Sun X4200 M2 dual-Opteron server with 20 GB RAM, 10 GB of it used for the ARC cache
  • 2 Sun (QLogic) FC cards installed, 4 Gbit/sec capable, connected through a SAN switch fabric to the RAID device
  • SAN switch fabric without other traffic (isolated)
  • Solaris 10 x86 with kernel 127112-11 installed (all relevant ZFS patches as of 2008/05/12; no kernel updates during the tests, so results remain comparable)
  • Sun SE6140 array with 10x 750 GB SATA and 6x 140 GB FC disks
  • Block size used on the disk array: 128 KB
  • ZFS recordsize: 128 KB

This was a real-world test, so the caches on the system (ARC) and on the RAID device were ON. I explicitly did not want a lab benchmark with all caches turned off.
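
For reference, here is a minimal sketch of how such a setup can be configured on Solaris 10 - the ARC limit goes into /etc/system, the recordsize is a ZFS property. The pool name "tank" is only an example, not the name used in these tests:

    * /etc/system entry: cap the ZFS ARC at 10 GB (10 * 1024^3 bytes), takes effect after a reboot
    set zfs:zfs_arc_max = 10737418240

    # 128 KB is the ZFS default recordsize, set here explicitly on the (hypothetical) pool "tank"
    zfs set recordsize=128k tank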

Test configurations

The following configurations were tested:

  • 1x3r5: RAID5 with 3 disks
  • 2x3r5: 2 RAID5 sets with 3 disks each, striped via ZFS
  • 1x6r5: RAID5 with 6 disks

Every setup was tested both with FC disks and with SATA disks.
In every graph, the results from my first candidate, the Infortrend OXYGENRAID device, are shown for comparable configurations.
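
To illustrate how the LUNs exported by the array map to ZFS pools on the host, a sketch follows; the device names and the pool name are made up, only the principle matters:

    # 1x3r5 / 1x6r5: one RAID5 LUN from the array, one ZFS pool on top of it
    zpool create tank c4t600A0B800012345600000001d0

    # 2x3r5: two RAID5 LUNs, striped by ZFS (dynamic striping across the two vdevs)
    zpool create tank c4t600A0B800012345600000001d0 c4t600A0B800012345600000002d0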

All devices were accessed via Sun's scsi_vhci (MPxIO) driver. The driver automatically recognizes this array as asymmetric, so only one path is used at a time.
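
On Solaris 10 this can be verified with mpathadm; the device name below is only an example:

    # list all multipathed logical units seen by scsi_vhci
    mpathadm list lu

    # show path count, access states and load-balancing settings for one LUN
    mpathadm show lu /dev/rdsk/c4t600A0B800012345600000001d0s2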

Test method

Sun's filebench tool was used to generate the results.

The following filebench personalities and parameters were used:

  • multistreamread
    • $filesize=10g
  • multistreamwrite
    • $filesize=10g
  • varmail
    • $filesize=10000
    • $nfiles=100000
    • $nthreads=60
  • oltp
    • $filesize=4g

All tests were run for 300 seconds ("run 300").
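
A typical run in filebench's interactive shell looked roughly like this (shown here for multistreamread; the test directory /tank/fb is an example, not the actual path used):

    filebench> load multistreamread
    filebench> set $dir=/tank/fb
    filebench> set $filesize=10g
    filebench> run 300

load pulls in the personality's defaults, set overrides the parameters listed above, and run 300 executes the workload for 300 seconds and prints filebench's throughput and latency summary at the end.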

ZFS was used as filesystem in all scenarios.



1. multistreamread


[Figure sun6140.msr.gif: multistreamread throughput]

In this test, pure bulk sequential data transfer is measured. As expected, the FC disks are much faster than their SATA counterparts. This is not a big surprise, as the SATA disks spin at 7500 rpm while the FC disks run at 15000 rpm.
With FC disks, the 2x3r5 configuration performed so well that the 4 Gbit/sec Fibre Channel link was saturated at 385 megabytes/sec (4 Gbit/sec FC delivers roughly 400 MB/sec of usable payload after 8b/10b encoding, so this is effectively the link limit).
The rather cheap Infortrend device performs very poorly compared to the Sun product.

As you would expect, latency is inversely proportional to bandwidth:

[Figure sun6140.msr2.gif: multistreamread latency]

The OXYGENRAID device takes up to three times longer to complete a read operation!


2. multistreamwrite

As above, this is a sequential bulk transfer test. Writes take more time than reads; that is the expected behaviour on RAID5 sets, since parity has to be computed and written in addition to the data:

[Figure sun6140.msw.gif: multistreamwrite throughput]




3. varmail

The varmail scenario is a test with many small files in one directory and many create and delete operations.

[Figure sun6140.varmail.gif: varmail results]

These results are rather fuzzy. In this scenario, using the expensive and fast FC disks does not yield big performance gains - the large performance loss of the 6-disk FC configuration is quite striking. Perhaps a bug in the RAID controller firmware? All tests were run three times and the mean taken as the result - in this case all three runs were equally poor.

And: Even the rather cheap Infortrend device can compete in this field.


4. oltp

The last test is the oltp scenario, simulating a transaction-based database access pattern with a log file and large data files. Updates are done in small chunks with extensive locking.
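
The personality is loaded the same way as the others, only the file size differs (the test directory is again just an example):

    filebench> load oltp
    filebench> set $dir=/tank/fb
    filebench> set $filesize=4g
    filebench> run 300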

[Figure sun6140.oltp3.gif: oltp results]

The FC disks show a clear performance advantage over the SATA disks and the Infortrend devices. FC disks are made for this kind of oltp access pattern.



