/var/vcap/sys/log/credhub/credhub.log or /var/vcap/sys/log/cloud-controller-worker/cloud-controller-worker.log, for example) will show errors like:

2026-04-07T05:26:26.405Z [https-jsse-nio-8844-exec-5] .... WARN — SqlExceptionHelper: SQL Error: 1452, SQLState: 23000
2026-04-07T05:26:26.405Z [https-jsse-nio-8844-exec-5] .... ERROR — SqlExceptionHelper: (conn=765380) Cannot add or update a child row: a foreign key constraint fails (`credhub`.`encrypted_value`, CONSTRAINT `encryption_key_uuid_fkey` FOREIGN KEY (`encryption_key_uuid`) REFERENCES `encryption_key_canary` (`uuid`) ON DELETE RESTRICT ON UPDATE RESTRICT)
2026-04-07T05:26:26.411Z [https-jsse-nio-8844-exec-5] .... ERROR — ExceptionHandlers: Value exceeds the maximum size.

{"timestamp":"2026-04-08T06:05:06.916715013Z","message":"Request failed: 500: {\"error_code\"=\u003e\"UnknownError\", \"description\"=\u003e\"An unknown error occurred.\", \"code\"=\u003e10001, \"test_mode_info\"=\u003e{\"description\"=\u003e\"Mysql2::Error: Cannot add or update a child row: a foreign key constraint fails (`ccdb`.`routes`, CONSTRAINT `fk_routes_space_id` FOREIGN KEY (`space_id`) REFERENCES `spaces` (`id`) ON DELETE RESTRICT ON UPDATE RESTRICT)\", \"error_code\"=\u003e\"CF-ForeignKeyConstraintViolation\",
/var/vcap/sys/log/pxc-mysql/mysql.err.log on all 3 MySQL nodes will return errors like:

2026-04-08T13:14:28.725702Z 9 [ERROR] [MY-010584] [Repl] Replica SQL: Could not execute Write_rows event on table credhub.encrypted_value; Cannot add or update a child row: a foreign key constraint fails (credhub.encrypted_value, CONSTRAINT encryption_key_uuid_fkey FOREIGN KEY (encryption_key_uuid) REFERENCES encryption_key_canary (`uuid`) ON DELETE RESTRICT ON UPDATE RESTRICT), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's source log FIRST, end_log_pos 0, Error_code: MY-001452
mysql-diag, run from the mysql-monitor VM, will report one of the nodes in Inconsistent STATE with CLUSTER STATUS Disconnected. This condition may or may not be present in the mysql-diag output, depending on whether the problem is impacting the Primary node or a Secondary node.

This problem was observed in Elastic Application Runtime (TAS) 10.2.6 and 10.2.7 with Credhub tile 1.6.7. It may appear in any component dependent on the EAR/TAS MySQL database, specifically when configured as a 3-node cluster.
NOTE: This problem has also been observed in Cloud Controller components as well as Credhub.
The TAS/EAR 3-node MySQL cluster is out of sync, even though mysql-diag reports the nodes as Synced. From a cluster perspective, Synced means the node is a healthy Galera member applying cluster transactions; it does not assert per-table consistency or foreign key integrity across nodes. An inconsistency on a single node between the tables joined by a foreign key constraint leads to the errors noted in the Issue/Introduction: an INSERT on one node succeeds, but when the write is replicated, the out-of-sync member fails to apply it because the referenced parent table row does not exist there.
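One way to confirm which node carries the inconsistency is to compare the child and parent tables named in the errors directly on each node. The following is a sketch, not a supported procedure; it assumes the same mylogin.cnf client config used by the log-collection commands later in this article, and the credhub tables from the errors above (adjust the table names for other affected pairs, e.g. ccdb.routes against ccdb.spaces):

```shell
# Run on EACH MySQL node: count child rows in credhub.encrypted_value whose
# encryption_key_uuid has no matching parent row in encryption_key_canary.
# A node returning a nonzero count while its peers return zero is the
# out-of-sync member.
sudo mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "
  SELECT COUNT(*) AS orphaned_rows
    FROM credhub.encrypted_value ev
    LEFT JOIN credhub.encryption_key_canary c
           ON ev.encryption_key_uuid = c.uuid
   WHERE c.uuid IS NULL;"
```

Comparing the row counts of both tables across the three nodes is an equally valid cross-check; the point is that the comparison must be run per node, since mysql-diag alone will not surface the divergence.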
A manual INSERT from a MySQL node forces a Primary vote, which exposes the problem node and moves it into an Inconsistent STATE with CLUSTER STATUS Disconnected. The problem node will not automatically restart or rejoin the cluster after the manual INSERT is sent.
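The shape of the underlying error can be reproduced in isolation in a throwaway schema, for example in a lab environment (illustrative only; never run this against the credhub or ccdb databases). The fk_scratch database name below is hypothetical; a child INSERT whose parent row is missing fails with the same ERROR 1452 seen in the logs:

```shell
MYCNF=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf

# Create a scratch parent/child pair with the same RESTRICT constraint style
# as the credhub tables in the errors above.
sudo mysql --defaults-file=$MYCNF -e "
  CREATE DATABASE IF NOT EXISTS fk_scratch;
  CREATE TABLE fk_scratch.parent (uuid VARBINARY(16) PRIMARY KEY);
  CREATE TABLE fk_scratch.child (
    id INT PRIMARY KEY,
    parent_uuid VARBINARY(16),
    CONSTRAINT parent_uuid_fkey FOREIGN KEY (parent_uuid)
      REFERENCES fk_scratch.parent (uuid)
      ON DELETE RESTRICT ON UPDATE RESTRICT);"

# This INSERT is expected to fail with ERROR 1452: the parent row is missing.
sudo mysql --defaults-file=$MYCNF -e "
  INSERT INTO fk_scratch.child VALUES (1, UNHEX('00'));"

# Clean up the scratch schema.
sudo mysql --defaults-file=$MYCNF -e "DROP DATABASE fk_scratch;"
```

In the failure described in this article, the INSERT succeeds on the node that holds the parent row and is only rejected when replication applies it on the member missing that row, which is what triggers the Primary vote.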
The cause of this inconsistent state is under investigation.
Reference "Manually forcing a MySQL node to rejoin the HA cluster" for details and command syntax for the steps below.
SSH to the mysql_monitor instance and run mysql-diag to find the node currently serving traffic through the proxy.
The mysql-diag output will report, in yellow, the node that traffic is currently proxied to, like this:

NOTE: Proxies will currently attempt to direct traffic to "mysql/########-####-####-####-1088cab2fc02"

bosh ssh into the node to which traffic is being proxied and run:
# sudo monit stop galera-init

Check whether the "foreign key constraint" errors stop and the other 2 nodes form a healthy cluster (run mysql-diag again from the mysql_monitor VM to confirm). If both conditions are met, the node on which galera-init was stopped is the problem node.
If the errors stopped and the other 2 nodes formed a healthy cluster, back up the local data on the problem node to force a fresh sync:
# sudo mv /var/vcap/store/pxc-mysql /var/vcap/store/pxc-mysql-backup
Restart the service:
# sudo monit start galera-init

Confirm the "foreign key constraint" errors do not repeat.
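Once galera-init restarts, the node should pull a full state transfer from the healthy members and rejoin. As a quick sketch of how to verify this from the rebuilt node (assuming the same mylogin.cnf client config used elsewhere in this article), check the Galera status variables:

```shell
# wsrep_local_state_comment should report Synced, wsrep_cluster_status should
# be Primary, and wsrep_cluster_size should be 3 once the state transfer
# completes on the rebuilt node.
sudo mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "
  SHOW GLOBAL STATUS WHERE Variable_name IN
    ('wsrep_local_state_comment','wsrep_cluster_status','wsrep_cluster_size');"
```

Running mysql-diag again from the mysql_monitor VM should show all 3 nodes healthy as well.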
To assist with investigation, if you encounter the foreign key constraint failure on TAS/EAR 10.2.x, please gather the following from all 3 MySQL nodes, create a Support Request with the Tanzu by Broadcom team, then upload the logs and MySQL query outputs to the SR:
my.cnf from each node (/var/vcap/jobs/pxc-mysql/config/my.cnf):
# sudo cp /var/vcap/jobs/pxc-mysql/config/my.cnf /var/vcap/sys/log/my.cnf.bak

Global variables and InnoDB engine status from each node:
# sudo mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "SHOW GLOBAL VARIABLES" > /var/vcap/sys/log/global_variables.txt
# sudo mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "SHOW ENGINE INNODB STATUS" > /var/vcap/sys/log/global_engine_innodb_status.txt

BOSH logs from the mysql instances:
# bosh logs -d <TAS_DEPLOYMENT_ID> mysql

From the backed-up data directory ($DATADIR below refers to /var/vcap/store/pxc-mysql-backup):
$DATADIR/GRA_*.log (binlogs of failed Galera transactions)
$DATADIR/*/*.ibd (affected tables only, from the failed node, if possible)