Nexenta Performance Testing No-SSD/SSD

In this post I will be testing disk IO without SSD for L2ARC and ZIL/SLOG, and then again with SSD for L2ARC and ZIL/SLOG. I will be using the VMware I/O Analyzer from here. The SSDs I will be using are Intel 520 120GB drives. We will see what the different IO scenarios look like for NFS and iSCSI. I will also be changing the size of the second virtual disk on the analyzer from the default of 100MB to 20GB, because my NAS/SAN has 16GB of memory installed and I do not want it to cache the whole working set and skew the results. Each test will run for the default duration of 120 seconds.
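
If you want to verify during a run that the box is not simply serving everything from RAM, and that the cache/log SSDs are actually being hit, a couple of quick checks on the Nexenta console should work (the pool name "tank" is just a placeholder for your own pool):

kstat -p zfs:0:arcstats:size   # current ARC size in bytes, to compare against the 20GB working set
zpool iostat -v tank 5         # per-vdev activity; the log and cache devices show up as their own lines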

Check out this post and then this one for information on my Nexenta configurations.

First, download the I/O Analyzer appliance zip file and extract it. Then import the OVA file using the vCenter client.
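
If you prefer the command line, VMware's ovftool can deploy the appliance as well. A rough example is below; the appliance file name, datastore, and vCenter/host paths are placeholders for whatever your environment uses:

ovftool --acceptAllEulas --name=IOAnalyzer --datastore=datastore1 \
  ./IOAnalyzer.ova \
  "vi://administrator@vcenter.local/Datacenter/host/esxi01.local"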

Below are the different VMware I/O Analyzer tests used. I tried to use what I would consider real-world scenarios to get a realistic sense of performance rather than just maximum performance numbers. If you would like to see a different scenario tested, feel free to leave a comment on this post.

OLTP 4K (4K 70% Read 100% Random) – NFS

NFS
(results screenshot)

NFS – SSD
(results screenshot)

NFS – (Sync=disabled)
(results screenshot)

NFS – SSD – (Sync=disabled)
(results screenshot)

OLTP 4K (4K 70% Read 100% Random) – iSCSI

iSCSI
(results screenshot)

iSCSI – SSD
(results screenshot)

The results above clearly show that iSCSI using SSD for ZIL/SLOG and L2ARC outperforms the other scenarios, including NFS.

SQL – 64k (64k 66% Read 100% Random) – NFS

NFS
(results screenshot)

NFS – SSD
(results screenshot)

NFS – (Sync=disabled)
(results screenshot)

NFS – SSD – (Sync=disabled)
(results screenshot)

SQL – 64k (64k 66% Read 100% Random) – iSCSI

iSCSI
(results screenshot)

iSCSI – SSD
(results screenshot)

The results above clearly show that iSCSI using SSD for ZIL/SLOG and L2ARC outperforms the other scenarios, including NFS.

Exchange 2007 (8k 55% Read 80% Random) – NFS

NFS
(results screenshot)

NFS – SSD
(results screenshot)

NFS – (Sync=disabled)
(results screenshot)

NFS – SSD – (Sync=disabled)
(results screenshot)

Exchange 2007 (8k 55% Read 80% Random) – iSCSI

iSCSI
(results screenshot)

iSCSI – SSD
(results screenshot)

The results above clearly show that iSCSI using SSD for ZIL/SLOG and L2ARC outperforms the other scenarios, including NFS.

Webserver (8k 95% Read 75% Random) – NFS

NFS
(results screenshot)

NFS – SSD
(results screenshot)

NFS – (Sync=disabled)
(results screenshot)

NFS – SSD – (Sync=disabled)
(results screenshot)

Webserver (8k 95% Read 75% Random) – iSCSI

iSCSI
(results screenshot)

iSCSI – SSD
(results screenshot)

The results above clearly show that iSCSI using SSD for ZIL/SLOG and L2ARC outperforms the other scenarios, including NFS.

Notes

  • The ZIL is not used for iSCSI writes unless you disable the writeback cache on the zvol (see the example after these notes).
  • For NFS it appears that throughput and IOPS increased only marginally, but latency decreased significantly when using SSD for ZIL/SLOG and L2ARC. However, iSCSI with SSD clearly outperformed NFS in all tests.
  • To check L2ARC usage, go here to get the arcstat.pl script and run it with the following command.
  • ./arcstat.pl -f read,hits,miss,hit%,l2read,l2hits,l2miss,l2hit%,arcsz,l2size
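
A rough sketch of the two knobs behind the scenarios above (the pool/dataset name and the LU GUID below are placeholders for your own): the Sync=disabled NFS runs come from the zfs sync property, and the writeback cache on an iSCSI zvol exposed through COMSTAR can be turned off per logical unit with stmfadm.

zfs set sync=disabled tank/nfs-datastore   # the "Sync=disabled" NFS test cases
zfs set sync=standard tank/nfs-datastore   # back to the default behavior
stmfadm list-lu                            # find the GUID of the zvol-backed LU
stmfadm modify-lu -p wcd=true <LU-GUID>    # wcd = write cache disabled; forces iSCSI writes through the ZIL/SLOG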

That is all I have for this testing scenario. Again, I welcome comments and would be glad to perform additional tests if asked.

Enjoy!

**Update** You can see the results of a few of these same tests on my new server build here.

11 thoughts on “Nexenta Performance Testing No-SSD/SSD”

  1. Some performance tunables (/etc/system) that might help this specific setup get even better performance. Note: this applies to this specific system only; the recommendation is for a 16GB RAM homebrew box with SATA drives and an Intel 520 for L2ARC.

    set zfs:zfs_txg_timeout = 10

    set zfs:zfs_txg_synctime_ms = 5000

    set zfs:zfs_vdev_max_pending = 4

    set zfs:l2arc_write_max = 0x2000000

    set zfs:l2arc_norw = 0

    • Here were the settings I had during the tests I ran.

      zfs_txg_timeout = 0x5

      mdb: variable zfs_txg_synctime not found: unknown symbol name

      zfs_vdev_max_pending = 0x1

      zfs_vdev_min_pending = 0x4

      These values are currently set in /etc/system:

      set nfs:nfs_allow_preepoch_time = 1

      set zfs:zfs_vdev_max_pending = 1

      • Ahh, I was assuming you had the default of 10 for zfs_vdev_max_pending – you already had it set to 1. In that case feel free to ignore my advice to set it to 4; I just didn't want you to jump from 10 to 1 without further explanation.

        10 is not optimal for SATA disks (gets you really high latency); 4 is a decent middle-ground, and may help utilize the L2ARC better than a lower value. 1 means we don't really queue, so latency is good and consistent from the drive, yet we can't do much in parallel – which means that the l2arc_norw tweak will be ineffective, as it exploits the queueing abilities of recent SSDs.

        Devil is in the details 🙂 Feel free to ask more.

          • I believe that iSCSI is outperforming NFS in my testing due to the fact that I am using MPIO from my ESXi 5.1 hosts. Read my other posts for more details on that. I myself was even shocked at the difference in performance. I was thinking of converting back to NFS prior to this testing that I did.

          • Hello, I just re-checked your results and I can see something really weird.

            You wrote this: “The results above clearly show that iSCSI using SSD for ZIL/SLOG and L2ARC outperforms the other scenarios, including NFS.”

            The issue here is that with block volumes (called zvols) you never use the ZIL.

            The ZIL is only used with sync writes coming from a share, and only when the block size is <= 32KB. If that is not the case, the writes go directly to the DMU and bypass the ZIL.

          • Very good points. I plan on doing some additional testing as soon as I get my new NexentaStor server built. So be on the lookout. I did add in the notes section that ZIL is not touched using iSCSI unless you disable writeback cache for the zvol.

        • Yep. I spent a good bit of time years ago with FreeNAS and its poor ZFS performance at the time, then moved over to Nexenta CE and did a lot of tweaking and testing, so some of these settings were already tweaked, but not many. One other thing I did was disable Nagle. Thanks for the feedback, for sure.

  2. Pingback: New Nexenta Server Test - iSCSI - SSD | Everything Should Be Virtual

  3. Pingback: Latest Nexenta Testing Results | Everything Should Be Virtual

  4. Pingback: Labworks 1:4-7 – The Last Word in ZFS Labworks | agnostic computing.com
