Performance Testing - drive and network speed

Article ID: 48749

Products

APPLOGIC

Issue/Introduction

Description:

This article is to be used to test disk and network I/O speeds at each level. Most of the tests use dd to create 4 GB files; the results are printed after the command completes. During the disk tests, ensure the files you create are at least double the size of the system's memory so that you are actually testing the disks and not just the memory. Run every test multiple times to get an average.

Solution:

Sometimes it is determined that AppLogic or the appliances have slow disk access and poor performance. The following tips are aimed at finding where the bottleneck may be.

During install, the system will test network speed with the following command:

  • ping -q -c 8 -s 1500 -i 0.1 <node ip> | awk '/^rtt/ { print $4 }'

  • Output syntax is min/avg/max/mdev

  • Anything higher than 300us avg will fail the test.

  • A typical (good) gigabit network will show around 160us.

  • Install tests from srv1 -> srv2 or BFC -> srv1 and vice versa.
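
The latency check above can be wrapped in a small helper so several runs are easy to compare. This is a sketch, not part of the installer; the function name is hypothetical and the example IP is a placeholder.

```shell
#!/bin/sh
# Sketch: repeat the install-time latency test and print each run's
# rtt summary (min/avg/max/mdev) so outliers stand out.
run_latency_test() {
    node_ip=$1
    runs=${2:-5}
    i=1
    while [ "$i" -le "$runs" ]; do
        # ping's summary line starts with "rtt"; field 4 is min/avg/max/mdev
        echo "run $i: $(ping -q -c 8 -s 1500 -i 0.1 "$node_ip" | awk '/^rtt/ { print $4 }')"
        i=$((i + 1))
    done
}

# Usage (placeholder IP): run_latency_test 192.168.0.12 5
```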

Appliance volume I/O:

  • Test the disk i/o speed of the appliance volume within the running appliance.

  • Write: dd if=/dev/zero of=/test bs=4096 count=1000000

  • Read: dd if=/test of=/dev/null bs=4096 count=1000000
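
The write and read commands above can be combined into one helper so the test is easy to repeat for averaging, with the test file cleaned up afterwards. A minimal sketch assuming a POSIX shell inside the appliance; disk_io_test is a hypothetical name:

```shell
#!/bin/sh
# Sketch: run the appliance write then read dd test and remove the test
# file. Keep bs*count at least 2x the appliance's RAM so the disks, not
# the cache, are measured (4096 * 1000000 = ~4 GB, as in the article).
disk_io_test() {
    test_file=${1:-/test}
    bs=${2:-4096}
    count=${3:-1000000}
    echo "write:"
    dd if=/dev/zero of="$test_file" bs="$bs" count="$count"
    echo "read:"
    dd if="$test_file" of=/dev/null bs="$bs" count="$count"
    rm -f "$test_file"   # remove the test file so it does not fill the volume
}

# Usage inside the appliance: disk_io_test /test 4096 1000000
```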

Dom0 physical drives I/O:

  • Tests the individual physical drives to see what the hardware is capable of.

  • hdparm -tT /dev/sdX

Dom0 LVM VolGroup I/O:

  • Tests the speed of the LVM volume group where the volume streams are stored.

  • Write: dd if=/dev/zero of=/var/applogic/test bs=4096 count=1000000

  • Read: dd if=/var/applogic/test of=/dev/null bs=4096 count=1000000

  • Note: oflag=direct can be added to the dd command to bypass any buffering for more accuracy.
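
Putting the note into practice, this sketch adds oflag=direct to the write; iflag=direct on the read is our assumption as the read-side analog, not something the article states. Path and sizes match the article's commands.

```shell
#!/bin/sh
# Sketch: the volume-group dd test with direct I/O to bypass buffering.
# oflag=direct comes from the note above; iflag=direct on the read is an
# assumption (the read-side equivalent), not from the article.
direct_io_test() {
    target=${1:-/var/applogic/test}
    dd if=/dev/zero of="$target" bs=4096 count=1000000 oflag=direct
    dd if="$target" of=/dev/null bs=4096 count=1000000 iflag=direct
    rm -f "$target"   # clean up the ~4 GB test file
}

# Run on the dom0: direct_io_test
```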

Dom0 A -> Dom0 B (and B -> A) Network I/O:

  • On the destination dom0 we will set up netcat to listen on port 1234 (an arbitrary unprivileged port) and pipe the data to /dev/null.

  • On Dom0 B: nc -l -p 1234 | dd of=/dev/null (Note: the version of netcat on the dom0 is older, so the -p flag is required; newer versions do not need it.)

  • This command will stay running with no output.

  • Now, on the source dom0 we can initiate the dd write command and pipe it to netcat to send to the destination dom0.

  • On Dom0 A: dd if=/dev/zero bs=4k count=1000000 | nc 192.168.0.12 1234

  • Replace 192.168.0.12 with the correct back-end IP for the destination dom0.

  • When the command finishes it will output the speed and time it took to transfer the file over the network. Use Ctrl-C to exit (this will also exit the other nc command). Note: The time/speed output displayed on the destination dom0 can be ignored.

  • You should run the test several times to get an average reading, and in both directions (A->B and B->A).
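
The two sides of the netcat test above can be sketched together for reference. The function names are hypothetical and the IP is the article's placeholder; run the receiver first.

```shell
#!/bin/sh
# Sketch of the two-dom0 netcat transfer; 1234 is the arbitrary
# unprivileged port from the steps above.

receiver() {
    # run this on the destination dom0; it stays running with no output
    # (older netcat needs -p; newer versions use just: nc -l 1234)
    nc -l -p 1234 | dd of=/dev/null
}

sender() {
    dest_ip=$1
    # dd prints the transfer time/speed here when the stream completes;
    # Ctrl-C afterwards also terminates the receiver's nc
    dd if=/dev/zero bs=4k count=1000000 | nc "$dest_ip" 1234
}

# On Dom0 B: receiver
# On Dom0 A: sender 192.168.0.12
```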

Dom0 A -> Dom0 B (and B -> A) Network I/O (nuttcp alternative):

  • nuttcp gives a much more accurate test of the network speed, eliminating almost all of the overhead that other utilities have.

  • nuttcp is not installed by default, so we will have to download and install it on both nodes first.

  • http://www.lcp.nrl.navy.mil/nuttcp/stable/rpm/nuttcp-5.3.1-1.i386.rpm

  • To install, just run: rpm -i nuttcp-5.3.1-1.i386.rpm

  • On Dom0 B run: nuttcp -S (Note: this command returns to the prompt with no output but continues running as a background process.)

  • On Dom0 A you can test the network speed with (for 1000 MB): nuttcp -t -4 -i1 -R1000M <remote host ip>

  • This will test (10 times) the pure maximum network bandwidth available and give an average at the end.

  • On 1G you will see close to 980 Mbps; with 10G (-R10000M) you will see about 5000 - 6000 Mbps.

Dom0 single vol stream mounted directly (loop):

  • This should not be done, since it can corrupt the md header in the volume.

  • mount -t ext3 -o loop,rw /var/applogic/volumes/vols/xxxxxx /mnt/mountpoint

  • Use this only as a last resort for data recovery.

Environment

Release:
Component: APPLGC