You may run into situations where maintenance (such as changing the vCenter IP) or incorrect user permissions results in increased logging and larger-than-expected syslog files. Multiple vCenter services write to the syslog file, so it needs to be analyzed to identify the source of the volume.
This article can be used to determine what's contributing to the syslog file by looking at the logging levels (INFO, NOTICE, WARN, ERROR, etc.) and the vCenter services (sps, vpxd-main, etc.) that appear in it.
The issue may also be noticed as increased Splunk volume. Depending on the size of the increase, you may want to stop sending logs to Splunk temporarily, identify the source of the issue in syslog, and then resume sending to Splunk after syslog has stabilized.
VMware vCenter Server 8.0 U3
Review the syslog file to determine the source of the messages.
1. Get a breakdown of what's in the logs by severity. In this case the bulk was from INFO.

└─$ grep -c INFO syslog.sample
25,434,729
└─$ grep -c NOTICE syslog.sample
0
└─$ grep -c WARN syslog.sample
2,990,962
└─$ grep -c ERROR syslog.sample
9,449
└─$ grep -c CRIT syslog.sample
0
└─$ grep -c ALERT syslog.sample
49,007
└─$ grep -c EMERG syslog.sample
0
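Rather than one grep per level, the same breakdown can be produced in a single pass with awk, which matters on a file with tens of millions of lines. This is a minimal sketch against a tiny stand-in file (in practice, point it at the real syslog.sample):

```shell
# Tiny stand-in for syslog.sample, for illustration only
printf '%s\n' 'host INFO msg1' 'host WARN msg2' 'host INFO msg3' 'host ERROR msg4' > /tmp/syslog.sample

# Count every severity keyword in one pass instead of re-reading the file per level
awk '{for (i = 1; i <= NF; i++)
        if ($i ~ /^(INFO|NOTICE|WARN|ERROR|CRIT|ALERT|EMERG)$/) n[$i]++}
     END {for (s in n) print n[s], s}' /tmp/syslog.sample
```

The full-field match (`^...$`) also avoids counting words that merely contain a severity string; adjust the pattern if your syslog writes severities in lowercase.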
The following article can be used if you want to change the logging level (e.g., stop sending INFO):
Configure desired level of vCenter logs sent to Remote Syslog Server
https://knowledge.broadcom.com/external/article?articleNumber=345261
2. Get a list of services that syslog is collecting:
└─$ awk '{print $8}' syslog.sample > syslog.sample.services.txt
3. Get a count of each service to see what stands out:

└─$ sort syslog.sample.services.txt | uniq -c
In this example, sps, vpxd-main, and vpxd-svcs-main were among the main contributors.
      54
    3966 analytics
       9 certificatemanagement-gclog
    2070 certificatemanagement-svcs
   47864 cis-license
      78 cloudvm-ram-size
    2452 content-library
    1435 CROND
   47874 eam-access
  189699 eam-api
    3388 eam-main
 1803633 envoy-access
     138 envoy-main
    1373 gclog
    3769 lookupsvc-lookupserver-default
    1300 observability-main
      73 rhttpproxy-main
     682 rsyslogd
      20 run-parts
      69 sca-gc
 6076884 sps
    4194 sps-gc
   19237 ssoadminserver
   20107 sso-identity-perf
    8666 sso-identity-sts
      28 sso-identity-sts-default
     103 sso-websso
     774 sudo
      78 systemd
     270 tokenservice
  153256 trustmanagement-svcs
     337 ui-apigw
       4 ui-changelog
     602 ui-dataservice
       6 ui-gc
    2317 ui-main
       1 ui-opid
    1042 ui-threadmonitor
     236 ui-vspheremessaging
   21812 vapi-endpoint
 3777094 vapi-endpoint-access
    3722 vapi-gc
   17823 vapi-jetty
    2330 vcsa-audit
      65 vdtc-main
       1 vlcm-vlcm
    4238 vmon
   10193 vpxd
 5261962 vpxd-main
     708 vpxd-profiler
 1677371 vpxd-svcs-access
 1518084 vpxd-svcs-main
10321723 vpxd-svcs-perf
       3 vpxd-svcs-runtime-err
  380755 vsan-health-main
    1026 vsm-main
    3728 vstats
    5899 vum-vmacore
    4629 wcpsvc
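The raw `uniq -c` output is unsorted, so the heavy hitters are easy to miss; piping it through `sort -rn` ranks services by volume. A sketch using a small stand-in services file:

```shell
# Stand-in for syslog.sample.services.txt, for illustration only
printf '%s\n' sps vpxd-main sps eam-api sps > /tmp/services.txt

# Rank services by message count, highest first, so the top offenders surface immediately
sort /tmp/services.txt | uniq -c | sort -rn
```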
4. Due to the size of the syslog file, extract the logs for each service to their own file:
grep -i "sps" syslog.sample > sps.txt
grep -i "vpxd-main" syslog.sample > vpxd-main.txt
etc.
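The extraction above can be scripted as a loop. Note that a bare `grep -i "sps"` also matches lines from sps-gc; bracketing the pattern with spaces keeps the match exact (this assumes the service name is space-delimited in these logs, as the awk field extraction in step 2 suggests):

```shell
# Stand-in syslog sample; in practice the input is the full syslog.sample file
printf '%s\n' 'host1 sps msg' 'host1 sps-gc msg' 'host1 vpxd-main msg' > /tmp/syslog.sample

for svc in sps vpxd-main; do
    # Surrounding spaces prevent "sps" from also matching "sps-gc"
    grep -i " $svc " /tmp/syslog.sample > "/tmp/$svc.txt"
done
wc -l /tmp/sps.txt /tmp/vpxd-main.txt
```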
5. Review the logs for the individual services:
6. vpxd-main & vpxd-svcs-main:
vpxd-main.txt & vpxd-svcs-main.txt were full of permission-related messages for a backup user:

vpxd-main - - - 2025-08-07T06:56:51.647Z info vpxd[13733] [Originator@6876 sub=UserDirectorySso opID=#### Authz-c5] GetUserInfoInternal(DOMAIN\backupuser, false) res: DOMAIN\backupuser
vpxd-svcs-main - - - 2025-08-17T06:56:26.030Z [tomcat-exec-173 [] WARN com.vmware.cis.authorization.impl.AclPrivilegeValidator opId=######] User DOMAIN\backupuser does not have privileges [System.Read] on object urn%3Avmomi%3AInventoryServiceTag%3######%3AGLOBAL
This was resolved by adding permissions for the backup user at the global level so it could access tags.
Check Global Permissions: confirm the user is listed with the Administrator role and that Propagate to children is selected.
Also verify the permissions under Tags & Custom Attributes.
7. sps:
The following entries were providing the majority of the volume:
840847 com.vmware.spbm.domain.vp.VendorProvider
5124329 com.vmware.vim.storage.common.task.CustomThreadPoolExecutor
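Counts like these can be produced by pulling the fully qualified Java class names out of the per-service file and tallying them. A sketch against a stand-in sps.txt (the `com.vmware.` prefix pattern is an assumption about the log format; widen it if other packages appear):

```shell
# Stand-in for sps.txt, for illustration only
printf '%s\n' \
  'info com.vmware.spbm.domain.vp.VendorProvider msg' \
  'info com.vmware.vim.storage.common.task.CustomThreadPoolExecutor msg' \
  'info com.vmware.spbm.domain.vp.VendorProvider msg' > /tmp/sps.txt

# Extract each class name and rank by frequency
grep -o 'com\.vmware\.[A-Za-z0-9._]*' /tmp/sps.txt | sort | uniq -c | sort -rn
```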
Adjust the logging level by following these steps:
1. SSH into the vCenter Server Appliance (VC) using a terminal.
2. Open the log4j2.properties file for editing:
- Path: `/usr/lib/vmware-vpx/sps/conf/log4j2.properties`
3. In the file, add the following logger names and set their levels to ERROR (entries marked "already added" were already present in this case):

logger.oputil.name=com.vmware.vim.storage.common.util.OperationIdUtil (already added)
logger.vcqimpl.name=com.vmware.vim.storage.common.vc.impl.VcQueryImpl (already added)
logger.stc.name=com.vmware.vim.sms.provider.vasa.feed.UsageContextListener
logger.osct.name=com.vmware.sps.pbm.compliance.ObjectStorageComplianceTask
logger.esb.name=com.vmware.spbm.domain.util.EntitySubjectBuilder
logger.cp.name=com.vmware.sps.pbm.compliance.ComplianceProcessor
logger.vp.name=com.vmware.spbm.domain.vp.VendorProvider
logger.ctpe.name=com.vmware.vim.storage.common.task.CustomThreadPoolExecutor
logger.oputil.level=ERROR (already added)
logger.vcqimpl.level=ERROR (already added)
logger.stc.level=ERROR
logger.osct.level=ERROR
logger.esb.level=ERROR
logger.cp.level=ERROR
logger.vp.level=ERROR
logger.ctpe.level=ERROR

4. Save and close the log4j2.properties file.
5. Restart the SPS service by running:
```
vmon-cli -r sps
```
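After the restart, it's worth confirming that log growth has actually settled. One rough check is to sample the syslog line count a couple of seconds apart; the path below is a placeholder for illustration, so substitute the actual syslog file on the appliance:

```shell
LOG=/tmp/syslog.sample            # placeholder path, for illustration only
printf 'seed line\n' > "$LOG"

before=$(wc -l < "$LOG")
sleep 2
after=$(wc -l < "$LOG")
# A steady, low number here indicates logging has stabilized
echo "lines added in 2s: $((after - before))"
```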
The logging returned to normal levels after making the above changes.