SRM Reprotect fails with error: Cannot process consistency group (or device) with role 'target' when expected consistency group (or device) with role 'promotedTarget'

Article ID: 384140


Products

VMware Live Recovery

Issue/Introduction

Symptom:

  • When performing a reprotect task through SRM after a failover, the following error is encountered:

    Failed to reverse replication for failed over devices. Cannot process consistency group 'SRM_XXX_XXX' with role 'target' when expected consistency group with role 'promotedTarget'.

  • The error may be slightly different, as follows
    (a 'device' may be mentioned instead of a 'consistency group'):

    Failed to reverse replication for failed over devices. Cannot process device 'xxxxxx' with role 'target' when expected device with role 'promotedTarget'.
     
  • You may also see the following message when running the Discover Devices task under Array Pairs.

Duplicate WWN '62:##:##:##:##:E8:##:F3:##:##:4A:##:##:##:##:##' found for devices 'peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name' and 'DS_Name' in SRA's 'discoverDevices' response.


Environment

VMware Site Recovery Manager 8.x

VMware Live Recovery 9.x

Cause

  • This error occurs when attempting to reprotect a consistency group in Site Recovery Manager (SRM) and the role of the consistency group is not as expected.

  • Replication configurations between the primary and secondary sites are not synchronized.

  • In the VC1 site's SRM log (/opt/vmware/support/logs/srm/vmware-dr.log), entries similar to the following are seen:

    2025-12-01T16:16:59.962Z error vmware-dr[154731] [SRM@6876 sub=RemoteTask.dr.storage.ReplicatedArrayPair.discoverLocalDevices10437 opID=af2e5c93-####-####-####-############-failover:cd70:fb47:3d0e-deactivate:bc47 ti
    d=dr.storage.ReplicatedArrayPair.discoverLocalDevices10437.DiscoverLocalDevices] The remote task 'vim.Task:a4b7a9c5-####-####-####-############:dr.storage.ReplicatedArrayPair.discoverLocalDevices10437' failed:
    --> (dr.storage.fault.DuplicateWwn) {
    -->    faultCause = (vmodl.MethodFault) null,
    -->    faultMessage = <unset>,
    -->    command = "discoverDevices",
    -->    responseXml = "<Identity>
    -->   <Wwn/>
    --> </Identity>",
    -->    id = "peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name",
    -->    otherId = "vgLyty/DS_Name",
    -->    wwn = "62:##:##:##:##:E8:##:F3:##:##:4A:##:##:##:##:##"
    -->    msg = "Duplicate WWN '62:##:##:##:##:E8:##:F3:##:##:4A:##:##:##:##:##' found for devices 'peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name' and 'vgLyty/DS_Name' in SRA's 'discoverDevices' response."

     

    2025-12-01T19:27:56.648Z verbose vmware-dr[01758] [SRM@6876 sub=DrTask ctxID=20b9aa18 opID=4f4af8f4-####-####-####-############-retrieveHistoryWindow] [520ae] Task 'dr.recovery.RecoveryHistoryManager.retrieveHistory
    Window582530' completed with result: (dr.recovery.RecoveryResult) [
    -->    (dr.recovery.RecoveryResult) {
    -->       runKey = 12134935,
    -->       operation = "reprotect",
    -->       options = (dr.recovery.RecoveryOptions) {
    -->          note = <unset>,
    -->          syncData = false,
    -->          plannedFailover = false,
    -->          migrateEligibleVms = true,
    -->          skipProtectionSiteOperations = true,
    -->          autoAnswerPrompts = false,
    -->          recoveryPoint = <unset>
    -->       },
    -->       plan = 'dr.recovery.RecoveryPlan:4b323d8a-####-####-####-############:780e20de-####-####-####-############',
    -->       planName = "Recovery_Plan",
    -->       planDescription = "",
    -->       user = "####",
    -->       startTime = "2025-12-01T19:27:23.453009Z",
    -->       stopTime = "2025-12-01T19:27:24.001819Z",
    -->       executionTimeInSeconds = 1,
    -->       totalPausedTimeInSeconds = 0,
    -->       resultState = "errors",
    -->       warningCount = 0,
    -->       errorCount = 1,
    -->       poweredOnVms = 0,
    -->       errorStateVms = 0,
    -->       successfullyRecoveredVms = 0,
    -->       ipCustomizedVms = 0,
    -->       errorIpCustomizedVms = 0,
    -->       poweredOffVms = 0,
    -->       warnings = <unset>,
    -->       errors = (vmodl.MethodFault) [
    -->          (dr.storageProvider.fault.StorageReverseReplicationFailed) {
    -->             faultCause = (dr.storage.fault.InvalidDeviceRole) {
    -->                faultCause = (vmodl.MethodFault) null,
    -->                faultMessage = <unset>,
    -->                id = "peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name",
    -->                role = "target",
    -->                expectedRole = "promotedTarget"
    -->                msg = "Cannot process device 'peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name' with role 'target' when expected device with role 'promotedTarget'."
    -->             },
    -->             faultMessage = <unset>
    -->             msg = "Failed to reverse replication for failed over devices. Cannot process device 'peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name' with role 'target' when expected device with role 'promotedTarget'."
    -->          }
    -->       ]
    -->    },

     
  • Analysis of the SRA discoverDevices output (logged under /opt/vmware/support/logs/srm/SRAs/) reveals that both the source and target devices for the DS_Name volume are incorrectly reported in a read-write state.
     
    <SourceDevices>
          <SourceDevice id="vgLyty/DS_Name" state="read-write">
            <Name>vgLyty/DS_Name</Name>
            <TargetDevice key="peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name" />
            <Identity>
              <Wwn>62:##:##:##:##:E8:##:F3:##:##:4A:##:##:##:##:##</Wwn>
            </Identity>
          </SourceDevice>
        </SourceDevices>
     <TargetDevice key="peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name" id="peer-of-2######e-####-####-####-b545####59d8:vgLyty/DS_Name" state="read-write">
            <Name>Replica of Array 1:vgLyty/DS_Name</Name>
            <Identity>
              <Wwn>62:##:##:##:##:E8:##:F3:##:##:4A:##:##:##:##:##</Wwn>
            </Identity>
          </TargetDevice>

    This indicates that the device status is not updated correctly on the storage array, which causes SRM to detect the WWN as a duplicate. In a working environment, the device would be read-only at one site and read-write at the other. A diagnostic check along these lines is sketched below.
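
    As an illustrative check (not a supported VMware tool), the duplicate-WWN condition can be spotted directly in a saved copy of the SRA's discoverDevices response. The following Python sketch assumes the response has been saved to a file and contains <SourceDevice>/<TargetDevice> elements with "state" attributes and <Identity><Wwn> children, as in the excerpt above; the script name and file path are hypothetical.

    # check_sra_wwns.py -- minimal sketch, not a supported VMware tool.
    # Flags WWNs that the SRA reports on more than one device, or that are
    # reported read-write on both the source and the replica device.
    import sys
    from collections import defaultdict
    import xml.etree.ElementTree as ET

    def collect_devices(path):
        """Return {wwn: [(device_id, state), ...]} for every device that lists a WWN."""
        devices = defaultdict(list)
        root = ET.parse(path).getroot()
        for tag in ("SourceDevice", "TargetDevice"):
            for dev in root.iter(tag):
                wwn = dev.findtext("./Identity/Wwn", default="").strip()
                if not wwn:
                    continue  # e.g. the <TargetDevice key=.../> reference inside a SourceDevice
                devices[wwn].append((dev.get("id", "<no id>"), dev.get("state", "<no state>")))
        return devices

    if __name__ == "__main__":
        for wwn, devs in collect_devices(sys.argv[1]).items():
            read_write = [dev_id for dev_id, state in devs if state == "read-write"]
            if len(devs) > 1:
                print(f"Duplicate WWN {wwn}: {devs}")
            if len(read_write) > 1:
                print(f"  -> read-write on both sides (expected on one site only): {read_write}")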

 

Resolution

  • In SRM, execute the "Discover Devices" task on the array pair to retrieve the list of replicated devices and their respective replication directions.

  • If the replicated device list does not populate or an error occurs during the device discovery process, consult the storage vendor to verify the device status.

  • Verify the current state and role of the device group on both sites using the storage array management tool or command-line utilities (a cross-site consistency check is sketched after this list).

  • Ensure that the replication settings on both sites are consistent and align with the required configurations for SRM.

  • Once the device group is in the correct role, attempt the reverse replication operation again through SRM.

  • If the issue persists, engage the storage vendor to investigate the underlying storage array for any additional errors or warnings that could indicate the root cause.

  • Additionally, verify the SRA (Storage Replication Adapter) and consistency group settings to confirm they are correctly configured. If the problem continues, consider opening a support request with the storage vendor for further assistance.
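
  • As an optional pre-check before retrying the reprotect, the discoverDevices responses saved from the SRAs at both sites can be compared to confirm that each replicated WWN is read-write at exactly one site. The Python sketch below is illustrative only and not a supported VMware tool; the file paths, site labels, and the assumed XML layout (device elements with "state" attributes and <Identity><Wwn> children, as shown in the Cause section) are assumptions.

    # compare_sites.py -- minimal sketch, not a supported VMware tool.
    # Usage: python compare_sites.py <protected_site.xml> <recovery_site.xml>
    import sys
    import xml.etree.ElementTree as ET

    def wwn_states(path):
        """Map WWN -> set of device states reported in one site's SRA discoverDevices output."""
        states = {}
        for dev in ET.parse(path).getroot().iter():
            if dev.tag not in ("SourceDevice", "TargetDevice"):
                continue
            wwn = dev.findtext("./Identity/Wwn", default="").strip()
            if wwn:
                states.setdefault(wwn, set()).add(dev.get("state", "unknown"))
        return states

    if __name__ == "__main__":
        site_a, site_b = wwn_states(sys.argv[1]), wwn_states(sys.argv[2])
        for wwn in sorted(set(site_a) | set(site_b)):
            rw_sites = [name for name, site in (("protected site", site_a), ("recovery site", site_b))
                        if "read-write" in site.get(wwn, set())]
            verdict = "OK" if len(rw_sites) == 1 else "check with the storage vendor"
            print(f"{wwn}: read-write at {rw_sites or ['neither site']} -> {verdict}")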