1. General settings
● Use pgbouncer for connection pooling (a pgbouncer.ini sketch follows this list). We had to use pgpool for one specific application that does not close its connections properly. Make sure the customer application closes its connections.
● nf_conntrack is better disabled: starting with version 4.0 of the Marketplace image, it is disabled.
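As an illustration only, a minimal pgbouncer.ini sketch; the database name, paths, and pool sizes below are hypothetical placeholders, not values from this deployment:

    [databases]
    ; hypothetical database entry - point it at the Greenplum master
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling returns server connections to the pool at the end of
    ; each transaction, which limits the damage done by applications that never
    ; close their connections
    pool_mode = transaction
    max_client_conn = 500
    default_pool_size = 20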
2. Kernel parameters - based on Microsoft recommendations
a. net.ipv4.conf.all.arp_filter = 1 (old value 0)
b. net.ipv4.tcp_fin_timeout = 60 (old value 45)
c. net.ipv4.tcp_max_tw_buckets = 130000 (old value 262144)
d. net.ipv4.tcp_tw_recycle = 1 (old value 0)
e. net.ipv4.tcp_tw_reuse = 1 (old value 0)
f. vm.min_free_kbytes = 1731094 (old value 42987) on the master host
g. vm.min_free_kbytes = 3465106 (old value 42987) on the segment hosts
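For reference, these values can be made persistent in /etc/sysctl.conf and applied with sysctl -p; the sketch below simply repeats the values from the list above (keep the master and segment variants of vm.min_free_kbytes straight). Note that net.ipv4.tcp_tw_recycle was removed in Linux kernel 4.12, so that line only applies to older kernels.

    # /etc/sysctl.conf - values from the list above
    net.ipv4.conf.all.arp_filter = 1
    net.ipv4.tcp_fin_timeout = 60
    net.ipv4.tcp_max_tw_buckets = 130000
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    # use 1731094 on the master host, 3465106 on segment hosts
    vm.min_free_kbytes = 1731094

    # apply without a reboot
    sysctl -p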
3. Greenplum parameters - based on Pivotal Engineering and support recommendations
a. gp_external_max_segs = 24 (old value 64), reduced in order to limit the number of flows
b. gp_gpperfmon_send_interval=5 (old value 1)
c. gp_max_packet_size=8192 (old value 1350)
d. gp_cached_segworkers_threshold=200 (old value 5)
e. gp_vmem_idle_resource_timeout=480s (old value 18s)
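These parameters can be set cluster-wide with the gpconfig utility; a minimal sketch follows. The commands use standard gpconfig syntax, but check each parameter's documentation, since some take effect on a configuration reload (gpstop -u) while others require a full restart.

    gpconfig -c gp_external_max_segs -v 24
    gpconfig -c gp_gpperfmon_send_interval -v 5
    gpconfig -c gp_max_packet_size -v 8192
    gpconfig -c gp_cached_segworkers_threshold -v 200
    gpconfig -c gp_vmem_idle_resource_timeout -v 480s
    gpstop -u    # reload configuration on a running cluster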
4. Data environment changes
● Upgrade the DataDirect ODBC driver to the latest version (7.1.6); the old ODBC driver caused performance issues.
● Increase the -w option of gpfdist from 0 to 10, as shown in the example below.
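For example, a gpfdist invocation with the increased wait time might look like the line below (the directory, port, and log file path are hypothetical placeholders); -w 10 tells gpfdist to wait 10 seconds before closing a target file:

    gpfdist -d /data/staging -p 8081 -w 10 -l /var/log/gpfdist.log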
Additional Resources:
https://docs.microsoft.com/bs-latn-ba/azure/virtual-network/virtual-machine-network-throughput