Upgrading a large DLP database using Oracle GoldenGate to minimise downtime during Oracle Upgrade.



Article ID: 245107


Updated On:

Products

Data Loss Prevention

Issue/Introduction

 

A customer is performing a database migration from Oracle 12c to 19c. Their approach is to export the data from the old 12c database, import it into the new 19c database, and enable forward replication from 12c to 19c until they cut over to the 19c database.
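For context, the export/import step of such an approach is typically done with Oracle Data Pump. A minimal sketch follows (PROTECT is the standard DLP schema name; the connection strings, directory object, and file names are illustrative only, not from this case):

    # On the 12c source: export the DLP schema with Data Pump
    expdp system@source12c schemas=PROTECT directory=DATA_PUMP_DIR dumpfile=protect.dmp logfile=protect_exp.log

    # On the 19c target: import the schema
    impdp system@target19c schemas=PROTECT directory=DATA_PUMP_DIR dumpfile=protect.dmp logfile=protect_imp.log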

 

They report that replication on the target 19c system is failing, and aborting, with the following error:

ORA-26799: unique constraint PROTECT.VONTU_STATS_COLLECTION_LOG_PK violated for table PROTECT.VONTU_STATS_COLLECTION_LOG with column values ("COLLECTION_LOG_ID") = (1074861)

Resolution

The Oracle Enterprise database itself is out of scope for Broadcom support; we support only the DLP schema running on an Oracle Enterprise database.

When database issues occur, they are the responsibility of the DBA, with support from Oracle, to resolve. We support the DLP product's interface with the schema, and our code within our schema, for which we will fix any schema bugs necessary.

Customers using Oracle Enterprise databases are free, and expected, to manage the database instance itself (including backup and recovery), so long as they do not alter our schema (running DDL/DML SQL directly against it) without being instructed to do so by Engineering, or without a support exception covering the changes they want to make (for example, switching our schema to use a different tablespace).

While GoldenGate (GG) has not been certified by Engineering as a migration option, other customers have successfully used it for this migration where they had the expertise needed to implement it, ensured that all schema objects were fully migrated in their current state, confirmed that the Update Readiness Tool (URT) passed before and after migration, and took responsibility for resolving GG issues.

In this case the issue is clearly with the GG implementation: the GG Replicat is trying to execute an insert DML operation that was already applied to the target database, which means the Replicat picked up log data containing DML operations that had already been committed in the data pump export. The DBA has to make sure that log replication starts at an SCN after the last SCN included in the data pump export. It is also possible that the undo tablespace was not large enough for the entire export to use the stated snapshot, so the export ended up containing data newer than the prescribed snapshot SCN. Again, this is all part of the GG implementation, which the customer owns.
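As an illustration of that SCN alignment (all SCN values, connection strings, and group names below are examples, not from this case), the DBA would capture the source SCN, pin the Data Pump export to it with the FLASHBACK_SCN parameter, and start the GoldenGate Replicat so it only applies transactions committed after that same SCN:

    # 1. On the 12c source, capture the current SCN in SQL*Plus:
    #      SELECT CURRENT_SCN FROM V$DATABASE;
    #    Suppose it returns 2846712.

    # 2. Run the export pinned to that SCN so the dump is a consistent
    #    snapshot (this is where a sufficiently large undo tablespace matters):
    expdp system@source12c schemas=PROTECT directory=DATA_PUMP_DIR dumpfile=protect.dmp flashback_scn=2846712

    # 3. In GGSCI on the target, start the Replicat after that SCN so it skips
    #    transactions already contained in the export:
    START REPLICAT RDLP, AFTERCSN 2846712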