DX Platform - How to check I/O throughput for NFS without using dbench-fio.sh

Article ID: 264108


Products

DX Application Performance Management, DX Operational Intelligence, CA App Experience Analytics

Issue/Introduction

Due to Kubernetes or OpenShift user restrictions, we are unable to use the <dx-platform>/tools/dbench-fio.sh script.

Is there an alternative tool available?

 

Environment

DX Platform 2.x

Resolution

The fio command can be used to benchmark Kubernetes persistent disk volumes: read/write IOPS, bandwidth (MB/s), and latency.

1) Install fio on all Kubernetes/OpenShift servers

yum -y install fio
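
To confirm the install on each node, check that the fio binary is available and reports its version:

fio --version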

 

2) If it does not already exist, create the dxi NFS folder (default "/nfs/ca/dxi")

On the NFS server:

mkdir -p /nfs/ca/dxi

-Open /etc/exports in an editor (for example, vi /etc/exports) and add your nodes; in this example: 001499, 001538, 001539, 001540, 001541

/nfs/ca/dxi 001499.example.com(rw,sync,no_root_squash,no_all_squash)
/nfs/ca/dxi 001538.example.com(rw,sync,no_root_squash,no_all_squash)
/nfs/ca/dxi 001539.example.com(rw,sync,no_root_squash,no_all_squash)
/nfs/ca/dxi 001540.example.com(rw,sync,no_root_squash,no_all_squash)
/nfs/ca/dxi 001541.example.com(rw,sync,no_root_squash,no_all_squash)

-Instruct exportfs to export the shared directory to all the servers:

exportfs -a

-Verification:

showmount --exports

Export list for 001499.example.com:
/nfs/ca/dxi 001538.example.com,001539.example.com,001541.example.com,001540.example.com,001499.example.com
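
If the export does not appear, the NFS server service may need to be started and the export table re-read. A minimal sketch, assuming a systemd-managed NFS server (the nfs-server unit name may differ by distribution):

systemctl enable --now nfs-server    # start the NFS server and enable it at boot
exportfs -ra                         # re-read /etc/exports after any change
exportfs -v                          # list what is currently exported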

 

3) Go to each of the servers (not the NFS server) and perform the mount

mkdir /test

sudo mount -t nfs <server>:/nfs/ca/dxi /test

Example:

sudo mount -t nfs 001499.example.com:/nfs/ca/dxi /test

NOTE: no output indicates success

cd /test
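
Before running the tests, it can help to confirm that the directory really is served over NFS and is writable. A quick check (the .rwcheck file name is just a throwaway example):

mount | grep /test                           # should show an nfs/nfs4 entry for /test
df -h /test                                  # confirms the mount and shows free space
touch /test/.rwcheck && rm /test/.rwcheck    # verifies write access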

 

4) Perform I/O Tests

 

a) Random read performance

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread


Starting 1 process

Jobs: 1 (f=1): [r(1)][10.6%][r=55.2MiB/s,w=0KiB/s][r=14.1k,w=0 IOPS][eta 00m:59s]
Jobs: 1 (f=1): [r(1)][12.1%][r=60.2MiB/s,w=0KiB/s][r=15.4k,w=0 IOPS][eta 00m:58s]

..

..

test: (groupid=0, jobs=1): err= 0: pid=141590: Fri Apr  2 17:25:53 2021

   read: IOPS=22.8k, BW=89.2MiB/s (93.5MB/s)(4096MiB/45913msec)

   bw (  KiB/s): min=53536, max=113016, per=99.97%, avg=91328.45, stdev=15748.62, samples=91

   iops        : min=13384, max=28254, avg=22832.09, stdev=3937.17, samples=91

  cpu          : usr=5.79%, sys=17.65%, ctx=450255, majf=0, minf=93

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64

 

Run status group 0 (all jobs):

   READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=4096MiB (4295MB), run=45913-45913msec




b) Random write performance

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite 

 

Starting 1 process

Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=12.9MiB/s][r=0,w=3300 IOPS][eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=177151: Tue Apr  6 08:46:17 2021

  write: IOPS=3451, BW=13.5MiB/s (14.1MB/s)(4096MiB/303800msec)

   bw (  KiB/s): min= 5688, max=16536, per=99.97%, avg=13801.93, stdev=1671.08, samples=607

   iops        : min= 1422, max= 4134, avg=3450.45, stdev=417.77, samples=607

  cpu          : usr=1.20%, sys=3.39%, ctx=317519, majf=0, minf=26

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64

 

Run status group 0 (all jobs):

  WRITE: bw=13.5MiB/s (14.1MB/s), 13.5MiB/s-13.5MiB/s (14.1MB/s-14.1MB/s), io=4096MiB (4295MB), run=303800-303800msec



c) Random read/write performance

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 

 

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64

fio-3.7

Starting 1 process

Jobs: 1 (f=1): [m(1)][100.0%][r=19.6MiB/s,w=6802KiB/s][r=5028,w=1700 IOPS][eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=183010: Tue Apr  6 09:24:34 2021

   read: IOPS=4905, BW=19.2MiB/s (20.1MB/s)(3070MiB/160217msec)

   bw (  KiB/s): min= 6160, max=26592, per=99.98%, avg=19616.80, stdev=4211.44, samples=320

   iops        : min= 1540, max= 6648, avg=4904.17, stdev=1052.85, samples=320

  write: IOPS=1639, BW=6558KiB/s (6715kB/s)(1026MiB/160217msec)

   bw (  KiB/s): min= 2208, max= 8592, per=99.98%, avg=6555.74, stdev=1411.28, samples=320

   iops        : min=  552, max= 2148, avg=1638.91, stdev=352.81, samples=320

  cpu          : usr=2.25%, sys=6.60%, ctx=585006, majf=0, minf=26

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64

 

Run status group 0 (all jobs):

   READ: bw=19.2MiB/s (20.1MB/s), 19.2MiB/s-19.2MiB/s (20.1MB/s-20.1MB/s), io=3070MiB (3219MB), run=160217-160217msec

  WRITE: bw=6558KiB/s (6715kB/s), 6558KiB/s-6558KiB/s (6715kB/s-6715kB/s), io=1026MiB (1076MB), run=160217-160217msec




d) Sequential read performance

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=read

 

Starting 1 process

Jobs: 1 (f=1): [R(1)][100.0%][r=504MiB/s,w=0KiB/s][r=8062,w=0 IOPS][eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=158718: Tue Apr  6 11:20:54 2021

   read: IOPS=7258, BW=454MiB/s (476MB/s)(4096MiB/9029msec)

   bw (  KiB/s): min=392832, max=528896, per=99.94%, avg=464278.56, stdev=46306.52, samples=18

   iops        : min= 6138, max= 8264, avg=7254.28, stdev=723.63, samples=18

  cpu          : usr=2.19%, sys=12.90%, ctx=54243, majf=0, minf=1051

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued rwts: total=65536,0,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64

 

Run status group 0 (all jobs):

   READ: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4096MiB (4295MB), run=9029-9029msec




e) Sequential write performance

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=write

Starting 1 process

Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=81.6MiB/s][r=0,w=1305 IOPS][eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=95455: Tue Apr  6 13:18:31 2021

  write: IOPS=1515, BW=94.7MiB/s (99.3MB/s)(4096MiB/43250msec)

   bw (  KiB/s): min=36864, max=114331, per=100.00%, avg=97049.01, stdev=16814.38, samples=86

   iops        : min=  576, max= 1786, avg=1516.36, stdev=262.71, samples=86

  cpu          : usr=1.02%, sys=3.02%, ctx=17717, majf=0, minf=24

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued rwts: total=0,65536,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64

 

Run status group 0 (all jobs):

  WRITE: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=4096MiB (4295MB), run=43250-43250msec
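
If you prefer to run all five tests in a single pass and keep the output for later comparison, a minimal wrapper sketch such as the one below can be used. The script name (run-nfs-fio.sh) and log file (fio-results.log) are only illustrative; the fio options are exactly the ones used in tests a) to e) above. Run it from the NFS mount point, for example /test.

#!/bin/bash
# run-nfs-fio.sh - illustrative wrapper around the fio tests from step 4.
# Appends all output to fio-results.log in the current directory.
set -e

# Options shared by all five tests (engine, size, queue depth, etc.)
COMMON="--randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --size=4G --iodepth=64"

run_test() {
  echo "=== $1 ===" | tee -a fio-results.log
  shift
  # COMMON is deliberately unquoted so it expands into individual options
  fio $COMMON "$@" | tee -a fio-results.log
}

run_test "a) Random read"       --bs=4k  --readwrite=randread
run_test "b) Random write"      --bs=4k  --readwrite=randwrite
run_test "c) Random read/write" --bs=4k  --readwrite=randrw --rwmixread=75
run_test "d) Sequential read"   --bs=64k --readwrite=read
run_test "e) Sequential write"  --bs=64k --readwrite=write

rm -f test   # remove the 4G test file when done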



Summary Report:

 

a) Random read performance

 iops        : min=13384, max=28254, avg=22832.09, stdev=3937.17, samples=91

READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=4096MiB (4295MB), run=45913-45913msec

 

b) Random write performance

iops        : min= 1422, max= 4134, avg=3450.45, stdev=417.77, samples=607

WRITE: bw=13.5MiB/s (14.1MB/s), 13.5MiB/s-13.5MiB/s (14.1MB/s-14.1MB/s), io=4096MiB (4295MB), run=303800-303800msec

 

c) Random read/write performance

READ  iops        : min= 1540, max= 6648, avg=4904.17, stdev=1052.85, samples=320

WRITE iops        : min=  552, max= 2148, avg=1638.91, stdev=352.81, samples=320

  

   READ: bw=19.2MiB/s (20.1MB/s), 19.2MiB/s-19.2MiB/s (20.1MB/s-20.1MB/s), io=3070MiB (3219MB), run=160217-160217msec

  WRITE: bw=6558KiB/s (6715kB/s), 6558KiB/s-6558KiB/s (6715kB/s-6715kB/s), io=1026MiB (1076MB), run=160217-160217msec

 

d) Sequential read performance

 iops        : min= 6138, max= 8264, avg=7254.28, stdev=723.63, samples=18

    READ: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4096MiB (4295MB), run=9029-9029msec

 

e) Sequential write performance

   iops        : min=  576, max= 1786, avg=1516.36, stdev=262.71, samples=86

  WRITE: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=4096MiB (4295MB), run=43250-43250msec

 

5) Compare the results with the recommended values in the documentation

https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/dx-platform-on-premise/23-1/installing/Hardware-software-requirements.html#concept.dita_d7ab03039be1dd1c0bbba340c2848b6d7d6349f2_RecommendedThroughputNFS

Compare the iops and bandwidth values above with the recommended NFS throughput values available on Techdocs.
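
If the fio output was saved to a file (for example the fio-results.log written by the wrapper sketch in step 4), the relevant summary lines can be extracted with a simple grep; the log file name here is only an assumption:

grep -E 'iops +:|READ: bw=|WRITE: bw=' fio-results.log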

 

Additional Information

https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/dx-platform-on-premise/23-1/installing/Hardware-software-requirements.html#concept.dita_d7ab03039be1dd1c0bbba340c2848b6d7d6349f2_RecommendedThroughputNFS