Spring Cloud Gateway Controller is throwing errors and exceptions similar to the following:
2025-10-20T09:32:11.479Z ERROR 1 --- [dGatewayMapping] i.k.client.informer.cache.Controller : DefaultController#processLoop get interrupted null

java.lang.InterruptedException: null
    at io.kubernetes.client.informer.cache.DeltaFIFO.pop(DeltaFIFO.java:318) ~[client-java-19.0.3.jar:na]
    at io.kubernetes.client.informer.cache.Controller.processLoop(Controller.java:182) ~[client-java-19.0.3.jar:na]
    at io.kubernetes.client.informer.cache.Controller.run(Controller.java:146) ~[client-java-19.0.3.jar:na]
    at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]

2025-10-20T09:32:11.479Z ERROR 1 --- [ool-19-thread-1] i.k.c.informer.cache.ProcessorListener : processor interrupted: {}

java.lang.InterruptedException: null
    at java.base/java.util.concurrent.LinkedBlockingQueue.take(Unknown Source) ~[na:na]
    at io.kubernetes.client.informer.cache.ProcessorListener.run(ProcessorListener.java:58) ~[client-java-19.0.3.jar:na]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:na]
    at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]

2025-10-20T09:32:11.480Z ERROR 1 --- [ingCloudGateway] i.k.client.informer.cache.Controller : DefaultController#processLoop get interrupted null
2025-10-20T09:32:11.480Z ERROR 1 --- [pool-6-thread-1] i.k.c.informer.cache.ProcessorListener : processor interrupted: {}
Spring Cloud Gateway for Kubernetes
These errors are caused by the container being shut down. They are expected whenever the Kubernetes Java client is stopped: stopping the informers interrupts the Controller and ProcessorListener worker threads, and each thread logs the resulting InterruptedException as it exits.
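The sketch below is not taken from the gateway's source; it is a minimal illustration using plain JDK classes of the pattern behind these messages. The informer's worker threads block on a queue waiting for events, and when the client is stopped the threads are interrupted, so the InterruptedException is simply the signal to stop waiting.

import java.util.concurrent.LinkedBlockingQueue;

public class InterruptOnShutdownSketch {
    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Mirrors the informer's ProcessorListener loop: block on the queue
        // until an event arrives or the thread is interrupted.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String event = queue.take();  // blocks, like ProcessorListener.run
                    System.out.println("processing " + event);
                }
            } catch (InterruptedException e) {
                // This is the exception seen in the gateway logs; it only means
                // the thread was asked to stop while waiting for work.
                System.out.println("processor interrupted: " + e);
            }
        }, "pool-6-thread-1");

        worker.start();
        Thread.sleep(500);
        worker.interrupt();  // what effectively happens when the container shuts down
        worker.join();
    }
}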
If the container was expected to stop (for example during a rolling update, scale-down, or pod deletion), no action is needed; the errors are harmless in this case.
If the container stopped unexpectedly, investigate why it was terminated (pod eviction, node maintenance, an OOM kill, a failing liveness probe, and so on), for example by reviewing the pod's events (kubectl describe pod, kubectl get events) and the container's last termination state, as in the sketch below.
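The pod's status records why each container last terminated. The following is a minimal, hypothetical sketch that reads that state with the same Kubernetes Java client that appears in the stack traces; the namespace and pod name are placeholders, and the exact CoreV1Api method signature can vary slightly between client versions. The same information is available from the pod description and events in the cluster.

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1ContainerStatus;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.util.Config;

public class LastTerminationState {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust to the namespace and pod of the affected gateway instance.
        String namespace = "default";
        String podName = "my-gateway-0";

        ApiClient client = Config.defaultClient();
        CoreV1Api api = new CoreV1Api(client);

        // Read the pod and print why each container last terminated
        // (for example OOMKilled, Error, or exit code 137 for an external SIGKILL).
        V1Pod pod = api.readNamespacedPod(podName, namespace, null);
        if (pod.getStatus() != null && pod.getStatus().getContainerStatuses() != null) {
            for (V1ContainerStatus status : pod.getStatus().getContainerStatuses()) {
                if (status.getLastState() != null && status.getLastState().getTerminated() != null) {
                    System.out.printf("%s: reason=%s exitCode=%s%n",
                            status.getName(),
                            status.getLastState().getTerminated().getReason(),
                            status.getLastState().getTerminated().getExitCode());
                }
            }
        }
    }
}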