This is a known issue; currently there is no resolution.
Workaround:
Either scale the cluster up significantly, or suspend scheduling of the IC sparkapp with the following commands:
kubectl edit scheduledsparkapp spark-app-infra-classifier-pyspark
# insert the following line just above spec.schedule
suspend: "true"
While scheduling of the IC jobs is suspended, the workload database keeps whatever classifications it already has but receives no new updates. You can still specify your own classifications for workloads, but no automated inferences are made until there is adequate memory to run the IC job and the `suspend: "true"` line is changed to `suspend: "false"` (or removed entirely).
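When adequate memory is available again, resuming scheduling is the same edit in reverse; a minimal sketch (add -n nsxi-platform if your kubectl context is not already set to that namespace):
kubectl edit scheduledsparkapp spark-app-infra-classifier-pyspark
# change the line to `suspend: "false"`, or delete it entirely, then save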
Then delete the IC driver pod and the sparkapp for the current run using these commands:
kubectl delete pod -n nsxi-platform spark-app-infra-classifier-pyspark-1675750518779665823-driver
kubectl get sparkapp -n nsxi-platform | grep infra
# find the entry that looks like `spark-app-infra-classifier-pyspark-#########`
kubectl delete sparkapp spark-app-infra-classifier-pyspark-######### -n nsxi-platform
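As an optional sanity check (these commands only list resources and reuse the names shown above), confirm that nothing from the failed run remains:
kubectl get pods -n nsxi-platform | grep infra-classifier
kubectl get sparkapp -n nsxi-platform | grep infra
# neither command should still show the deleted driver pod or sparkapp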