The following documents the various changes and recommendations to improve I/O in this storage change scenario using PURE storage. One or more of these changes might be needed to resolve similar I/O problems when using the PURE storage solution.
Is there a way to compare numbers before and after the storage change?
- One option for seeing what changes are needed, or what differs before and after the storage change, is the dr_validate script and its logs.
- Run the script after the storage change.
- Compare its logs for the current config to the ones from the latest upgrade/install cycle. What is different?
- Do things improve if the new systems are configured to match the old?
- The logs are found in (default path) /opt/CA/IMDataRepository_vertica<version>/logs.
- The new install doc pages for the DR system have a good review of the things validate will change/check. They can be found here:
- If vioperf data is available from before the storage change, run it again after the change. Compare the before and after results. Is it better or worse?
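The before/after comparison above can be sketched as a small wrapper script. This is a sketch only: the vioperf install path, the `--duration` flag, and the data directory are typical for Vertica installs but may differ by version, so adjust them to your environment.

```shell
# Re-run vioperf after the storage change and keep the output for comparison
# against the pre-change logs. Paths and flags below are assumptions; adjust
# VIOPERF and the tested directory to match your install.
VIOPERF=${VIOPERF:-/opt/vertica/bin/vioperf}
OUTDIR=${OUTDIR:-/tmp/vioperf-after}

if [ -x "$VIOPERF" ]; then
    mkdir -p "$OUTDIR"
    # Test against the actual Vertica data directory so the results
    # reflect the storage the database really uses.
    "$VIOPERF" --duration=60s /path/to/vertica/data \
        | tee "$OUTDIR/vioperf-$(hostname)-$(date +%F).log"
else
    echo "vioperf not found at $VIOPERF; set VIOPERF to its location" >&2
fi
```

Keeping one dated log per node makes it easy to diff the counters node by node against the pre-change run.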
What other configurations might be made or recommended to help improve I/O?
- Compared to fast local NVMe drives, the FlashArray configuration is slower.
- Ensure the scheduler settings and other configs like readahead and max_sectors_kb are correct on all nodes.
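A quick way to compare those settings across nodes is to dump them for every block device. This is a generic sysfs read, a sketch to run on each node and diff; the desired values depend on your environment (note that `read_ahead_kb` is in KB, while `blockdev --getra` reports 512-byte sectors, so a readahead of 2048 sectors equals 1024 KB).

```shell
# Report the current scheduler, readahead, and max_sectors_kb for every
# block device so the nodes can be compared side by side.
for dev in /sys/block/*; do
    name=$(basename "$dev")
    sched=$(cat "$dev/queue/scheduler" 2>/dev/null)
    ra=$(cat "$dev/queue/read_ahead_kb" 2>/dev/null)
    msk=$(cat "$dev/queue/max_sectors_kb" 2>/dev/null)
    printf '%-8s scheduler=%s read_ahead_kb=%s max_sectors_kb=%s\n' \
        "$name" "$sched" "$ra" "$msk"
done
```

The active scheduler is shown in brackets in the `scheduler` field, e.g. `[noop] deadline cfq`.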
- Vertica suggests that noop is the better choice than deadline in this situation.
- This is from the Vertica docs, which show both are acceptable:
- "This topic details how to change I/O Scheduling to a supported scheduler. Vertica requires that I/O Scheduling be set to deadline or noop."
- In this scenario, with it configured to deadline while observing poor performance, changing the scheduler to noop improves the I/O.
- Additional information regarding the noop setting:
- "The NOOP scheduler uses a simple FIFO approach, placing all input and output requests into a single queue. This scheduler is best used on solid state drives (SSDs). Because SSDs do not have a physical read head, no performance penalty exists when accessing non-adjacent sectors."
- The following command, run as root, sets the disk(s) involved to use noop:
- echo noop > /sys/block/<DEVICENAME>/queue/scheduler
- The change takes effect immediately, but it does not persist across reboots unless it is made permanent.
- Further information for RedHat Linux can be found here:
- It is also worth checking /etc/rc.local for lines that reference deadline for the same <DEVICENAME> used in the echo command.
- Update any found to noop for the same <DEVICENAME>.
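One common way to make the scheduler setting survive a reboot, consistent with the /etc/rc.local check above, is to re-apply the echo at boot. A sketch, with <DEVICENAME> as a placeholder for each Vertica data device (on systemd-based RHEL 7+, /etc/rc.local must also be executable for this to run):

```
# /etc/rc.local (excerpt) -- re-apply noop at boot for each data device
echo noop > /sys/block/<DEVICENAME>/queue/scheduler
```

A udev rule keyed on the device is an alternative persistence mechanism; consult the Red Hat documentation referenced above for the recommended approach on your release.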
- It is very important that the cluster nodes are configured with the same CPU core count, and the same amount of RAM.
- In this scenario one node was configured with 32 CPU cores via hyperthreading (HT) configurations.
- The other two nodes had HT disabled and were using 16 CPU cores.
- The 32-core node provided better I/O than the 16-core nodes.
- Engineering recommends that if HT is to be enabled on all nodes, first verify that vioperf shows good numbers at 32 cores versus 16.
- Whether running at 32 cores via HT is worthwhile depends on overall system memory and on the vioperf results with 32 threads writing and reading at the same time.
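The core-count mismatch described above can be caught by running a quick check on every node and comparing the output. A minimal sketch using standard `lscpu` fields:

```shell
# Confirm core count and hyperthreading status match on every node.
# "Thread(s) per core: 2" indicates HT is enabled; run this on each
# node in the cluster and compare the values side by side.
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)):'
```

Checking total RAM on each node at the same time (e.g. with `free -g`) is also worthwhile, since the nodes should match on memory as well as cores.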
- Memory is key: each query needs a reasonable initial memory allotment to run well.
- The higher the CPU count, the smaller that per-query allotment is when plannedconcurrency is set to AUTO.
- For example, if the RIB pool is 10% with plannedconcurrency of AUTO and 64 GB of RAM, then 6.4 GB is reserved for RIB. Divide that by plannedconcurrency:
- 6.4GB / 32 cpu = 200MB/query
- 6.4GB / 16 cpu = 400MB/query
- Vertica may use a number less than 32 depending on overall memory, but that example is the worst-case scenario.
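The per-query arithmetic above can be reproduced with a one-liner per core count, using the numbers from the example (10% of 64 GB reserved for the pool):

```shell
# Per-query memory when plannedconcurrency=AUTO follows the core count:
# per_query = reserved_pool_memory / cores
reserve_mb=6400                                   # 6.4 GB reserved for RIB
echo "32 cores: $((reserve_mb / 32)) MB/query"    # 200 MB/query
echo "16 cores: $((reserve_mb / 16)) MB/query"    # 400 MB/query
```

Doubling the core count via HT halves the initial per-query allotment, which is why the memory sizing has to be revisited before enabling HT everywhere.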
- Ensure the boot device is properly configured for multipath. Below are sample parameters for the multipath configuration.
path_selector "queue-length 0"
hardware_handler "1 alua"
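For context, a minimal multipath.conf sketch showing where those two parameters live. Everything besides `path_selector` and `hardware_handler` here is an assumption for illustration; the authoritative vendor/product stanza values should come from Pure's best-practices guide for your array model and OS release.

```
devices {
    device {
        vendor           "PURE"
        product          "FlashArray"
        path_selector    "queue-length 0"
        hardware_handler "1 alua"
    }
}
```

After editing, reload the configuration (e.g. restart multipathd) and verify the active settings with `multipath -ll`.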
What size I/O is vioperf using to perform its tests?
Vertica asks for a readahead of 2048 and a block size of 4096 to operate properly, so vioperf would be using 4096 as the block size for its performance checks.
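The readahead value can be verified per device with `blockdev`, which reports in 512-byte sectors (so the target is 2048). A sketch; the device name is a placeholder and reading the device node typically requires root:

```shell
# Verify readahead on a Vertica data device; DEV is a placeholder
# (e.g. sdb or dm-3) and should be set to the real device name.
DEV=${DEV:-sda}
if [ -b "/dev/$DEV" ]; then
    ra=$(blockdev --getra "/dev/$DEV")
    echo "/dev/$DEV readahead = $ra sectors (Vertica expects 2048)"
    # To change it (as root): blockdev --setra 2048 /dev/$DEV
fi
```

As with the scheduler, a `--setra` change made this way does not persist across reboots on its own and should also be made permanent.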