The OPA service fails in one environment after deploying with "--set ssp.featureFlags.accesscontrol.enabled=true". The same version (2.2.0.1466) was previously working in that region, is still working in the other regions, and a separate development region runs fine with this setting enabled. The issue persists after reverting/removing the flag.
OPA pods fail with the following events:
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Normal   Pulling    28m (x126 over 13h)    kubelet  Pulling image "hubble.example.net:5000/example/linux/ae/vendor/opa:2.2.0.1466"
  Warning  BackOff    8m2s (x1486 over 13h)  kubelet  Back-off restarting failed container
  Warning  Unhealthy  3m3s (x3164 over 13h)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
The container logs show a 'rego_unsafe_var_error':
Defaulted container "ssp-opa" out of: ssp-opa, appd-injector (init)
{"level":"error","msg":"Bundle activation failed: 1 error occurred: ./x6f9bc63dxad6fx419ex9e61xf97c96e61a72auth.rego:231: rego_unsafe_var_error: var resN is unsafe","name":"authz","plugin":"bundle","time":"2024-03-19T14:19:15Z"}
{"level":"error","msg":"Bundle activation failed: 1 error occurred: ./x6f9bc63dxad6fx419ex9e61xf97c96e61a72auth.rego:231: rego_unsafe_var_error: var resN is unsafe","name":"authz","plugin":"bundle","time":"2024-03-19T14:20:15Z"}
Does enabling accesscontrol require other changes to policies?
Also, is it possible that a bad policy was created prior to the redeployment and is now blocking service startup? The "rego_unsafe_var_error" started appearing intermittently a few hours before we redeployed, mixed into mostly successful logs; now, however, we see only Rego failures in the service logs.
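If it helps with diagnosis: we can extract the current bundle contents and run "opa check" against the .rego files, which should reproduce the same compile error offline if the policy itself is at fault rather than anything in the deployment.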
VIP AuthHub 2.2
One additional observation: the policy rule contains special characters.
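If those special characters turn out to be relevant: in Rego, a name containing a character such as "-" cannot be referenced with plain dot notation; it has to be written with quoted bracket notation, otherwise the parser splits the token and treats the trailing part as a separate, unbound variable, which surfaces as exactly this kind of unsafe-var error. A hypothetical illustration (not our actual rule):

package example.authz

# Parsed as (input.res - name): "name" is an unbound variable, so this
# fails with "rego_unsafe_var_error: var name is unsafe".
allow {
    input.res-name == "reports"
}

# Correct: reference the key with quoted bracket notation.
allow {
    input["res-name"] == "reports"
}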