...

Panel

user@hostname ~$ iperf -v
iperf version 2.0.2 (03 May 2005) pthreads

Because in many cases low network performance is caused by high CPU load, you should measure CPU usage at both ends of the link during every test round. In this HOWTO we use the open-source vmstat tool, which you probably already have installed on your machines.
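
For example, you can log CPU usage once per second to a file on each host for the duration of a test round (the log file name below is just an illustration; remember to stop the background vmstat when the round is finished):

Panel

user@hostname ~$ vmstat 1 > vmstat-$(hostname)-$(date +%F).log &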

...

To enable MTU 9000 on the machines' network interfaces you can use the ifconfig command.

Panel

root@hostname ~$ ifconfig eth1 mtu 9000
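
On systems with the iproute2 tools installed, the same change can be made with the ip command (a sketch, assuming the same interface name eth1):

Panel

root@hostname ~$ ip link set dev eth1 mtu 9000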

...

If jumbo frames are working properly, you should be able to ping one host from another using a large MTU:

Panel

root@hostname ~$ ping 10.0.0.1 -s 8960

In the example above we use 8960 instead of 9000 because the -s option of ping expects the frame size minus the frame header, whose length is 40 bytes. If you cannot use jumbo frames, set the MTU back to the default value of 1500.
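
To make sure the large packets are really sent unfragmented, you can additionally pass the -M do option of the Linux (iputils) ping, which prohibits fragmentation and reports an error when a packet does not fit the path MTU:

Panel

root@hostname ~$ ping -M do -s 8960 10.0.0.1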

To tune your link you should first measure the average Round Trip Time (RTT) between the machines. The time reported by the ping command is already the round-trip time, so averaging a few ping results gives you the RTT directly. Once you have the RTT, you can set the TCP read and write buffer sizes. There are three values you can set: the minimum, initial and maximum buffer size. The theoretical value (in bytes) for the initial buffer size is the bandwidth-delay product BPS / 8 * RTT, where BPS is the link bandwidth in bits per second and RTT is expressed in seconds. Example commands that set these values for the whole operating system are:

Panel

root@hostname ~# sysctl -w net.ipv4.tcp_rmem="4096 500000 1000000"
root@hostname ~# sysctl -w net.ipv4.tcp_wmem="4096 500000 1000000"
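
As a worked example of the formula above (the link speed and RTT are assumed values, not measurements): for a 1 Gbit/s link with an 8 ms RTT, BPS / 8 * RTT = 10^9 / 8 * 0.008 = 1,000,000 bytes, which is the order of magnitude of the buffer sizes used above. A small shell sketch of the same calculation:

Panel

# Hypothetical example values; substitute your measured link speed and RTT
BANDWIDTH_BPS=1000000000   # 1 Gbit/s
RTT_S=0.008                # 8 ms average RTT
# bandwidth-delay product in bytes: BPS / 8 * RTT
awk -v bps=$BANDWIDTH_BPS -v rtt=$RTT_S 'BEGIN { printf "buffer size: %d bytes\n", bps / 8 * rtt }'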

...

You can also experiment with maximum socket buffer sizes:

Panel

root@hostname ~# sysctl -w net.core.rmem_max=1000000
root@hostname ~# sysctl -w net.core.wmem_max=1000000

Other options that should boost performance are:

Panel

root@hostname ~# sysctl -w net.ipv4.tcp_no_metrics_save=1
root@hostname ~# sysctl -w net.ipv4.tcp_moderate_rcvbuf=1
root@hostname ~# sysctl -w net.ipv4.tcp_window_scaling=1
root@hostname ~# sysctl -w net.ipv4.tcp_sack=1
root@hostname ~# sysctl -w net.ipv4.tcp_fack=1
root@hostname ~# sysctl -w net.ipv4.tcp_dsack=1
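
If you want these settings to survive a reboot, one common approach (assuming a standard /etc/sysctl.conf layout) is to append them to /etc/sysctl.conf and reload it, for example:

Panel

root@hostname ~# echo "net.ipv4.tcp_window_scaling = 1" >> /etc/sysctl.conf
root@hostname ~# sysctl -p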

...

To perform the test, we should run iperf in server mode on one host:

Panel

root@hostname ~# iperf -s -M $mss

On the other host we should run a command like this:

Panel

root@hostname ~# iperf -c $serwer -M $mss -P $threads -w ${window} -i $interval -t $test_time
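
For instance, with assumed placeholder values (server at 10.0.0.1, a 1460-byte MSS, 4 parallel streams, a 500 KB window, 1-second reports and a 60-second run) the client command could look like this:

Panel

root@hostname ~# iperf -c 10.0.0.1 -M 1460 -P 4 -w 500K -i 1 -t 60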

...

To generate a key pair we use the following command:

Panel

root@hostname ~# ssh-keygen -t dsa

Then we copy the public key to the remote server and add it to the authorized_keys file:

Panel

root@hostname ~# cat identity.pub >> /home/sarevok/.ssh/authorized_keys
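
Alternatively, if the key pair was generated under the default name id_dsa, the ssh-copy-id helper shipped with OpenSSH installs the public key in one step (the remote host name below is a placeholder):

Panel

root@hostname ~# ssh-copy-id -i ~/.ssh/id_dsa.pub sarevok@remote_host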

...

In the table below, S is the capacity of a single disk and N is the total number of disks; for RAID50, N5 is the number of disks in each RAID5 group and N0 is the number of groups. The fault tolerance and performance columns are relative ratings (higher is better).

| RAID Level | Capacity          | Storage efficiency | Fault tolerance | Sequential read perf | Sequential write perf |
|------------|-------------------|--------------------|-----------------|----------------------|-----------------------|
| RAID0      | S * N             | 100%               | 0               | 5                    | 5                     |
| RAID1      | S * N / 2         | 50%                | 4               | 2                    | 2                     |
| RAID5      | S * (N - 1)       | (N - 1)/N          | 3               | 4                    | 3                     |
| RAID6      | S * (N - 2)       | (N - 2)/N          | 4.5             | 4                    | 2.5                   |
| RAID10     | S * N / 2         | 50%                | 4               | 3                    | 4                     |
| RAID50     | S * N0 * (N5 - 1) | (N5 - 1)/N5        | 3.5             | 3                    | 3.5                   |

RAID benchmarking assumptions

...

When you create a RAID array, you should be able to see details similar to the information shown below:

Panel

root@sarevok bin# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.03
Creation Time : Mon Apr 6 17:41:43 2009
Raid Level : raid0
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3
Persistence : Superblock is persistent
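
For reference, a four-disk RAID0 array like the one shown above could be created with a command along these lines (the device names are assumptions and must match your disks):

Panel

root@sarevok ~# mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1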

...

for writing performance:

Panel

root@sarevok ~# time dd if=/dev/zero of=/mnt/md1/test_file.txt bs=1024M count=32

and for reading performance:

Panel

root@sarevok ~# time dd if=/mnt/md1/test_file.txt of=/dev/zero bs=1024M count=32
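
Note that the read test may be served partly from the page cache rather than from the disks. To measure the array itself, you can drop the caches between the write and the read run (a standard Linux mechanism; treat the exact procedure as a suggestion):

Panel

root@sarevok ~# sync
root@sarevok ~# echo 3 > /proc/sys/vm/drop_caches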

...

Panel

# Loop over every block size / queue depth combination
blocksize=$min_blocksize
while [ $blocksize -le $max_blocksize ]; do
    queuedepth=$min_queuedepth
    while [ $queuedepth -le $max_queuedepth ]; do

Panel

# Log CPU usage in the background for the duration of this test round
vmstat 1 > $dst_path/vmstat-$blocksize-$queuedepth-$curr_date &

# Run iozone in threaded throughput mode with $queuedepth threads: write (-i 0)
# and read (-i 1) tests, including close() (-c) and flush (-e) in the timings
/root/iozone -T -t $queuedepth -r ${blocksize}k -s ${file_size}G -i 0 -i 1 -c -e > $dst_path/iozone-$blocksize-$queuedepth-$curr_date

...

To start the test script in a way that ignores hangup signals, you can use the nohup command:

Panel

root@sarevok ~$ nohup script.sh &

This command keeps the test running after we close the session with the server. To present the obtained results we use the open-source gnuplot program.
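
A minimal gnuplot sketch, assuming the iozone results have first been reduced to a two-column text file (the file name and column layout are assumptions):

Panel

gnuplot <<'EOF'
set terminal png size 800,600
set output "throughput.png"
set xlabel "Block size (KB)"
set ylabel "Throughput (MB/s)"
plot "results.dat" using 1:2 with linespoints title "iozone write"
EOF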

...