vSAN - VM Creation Workflow

Article ID: 326497


Updated On: 12-03-2024

Products

VMware vSAN

Issue/Introduction

This article provides detailed information on the workflow used to create virtual machines residing on a VMware vSAN datastore. Although the examples given here describe the successful creation of a VM on vSAN, the same information can be used to diagnose failures when deploying such virtual machines.

Environment

VMware vSAN

Resolution

Overview

This is the workflow used by vSAN to create a virtual machine on the vSAN Datastore:
  1. The create operation goes to the DOM_CLIENT.

  2. The DOM_CLIENT identifies the DOM_OWNER.

  3. The DOM_OWNER then sends the storage policy (SP) to CLOM (the DOM_OWNER does not process the SP itself).

  4. CLOM returns the configuration to be used to satisfy the SP of the virtual machine.

  5. The DOM_OWNER now knows where to create all the components to satisfy the SP of the virtual machine, that is, on which hosts and MDs to place the object's components.

  6. The DOM_OWNER sends the create tasks to the Component Manager of each of those hosts.

  7. The Component Managers then send the requests to the local LSOM of each host.

  8. LSOM writes the data to the MDs.

Note:
  • MD stands for Magnetic Disk / Capacity drive.

  • DOM stands for Distributed Object Manager.

  • CLOM stands for Cluster-Level Object Manager.

  • LSOM stands for Local Log-Structured Object Manager.
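
These layers map to different logs and tools on the ESXi hosts: DOM and LSOM messages appear in /var/log/vmkernel.log, CMMDS is queried with the cmmds-tool utility, and CLOM runs as the user-space daemon clomd, which logs to /var/log/clomd.log. As a quick sanity check (a minimal sketch; exact module names vary by release), you can confirm that the vSAN kernel modules are loaded with:

vmkload_mod -l | grep -i vsan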

 

The VM Creation Workflow

Start by specifying the virtual machine name whose creation you are tracing.
 

1. Open the vpxd.log file on the vCenter Server

In this document we have created a virtual machine called WorkFlowVM with a default Storage Policy of FT=1:
2015-03-06T04:01:43.094-07:00 [13072 info 'VmProv' opID=2408fce-ee] Setting the VM config path to ds:///vmfs/volumes/vsan:5257a7e0dd6f860e-bxxxxxxxxa/4028e354-2319-9031-af04-0xxxxxb/WorkFlowVM.vmx
2015-03-06T04:01:43.419-07:00 [13072 info 'Vpxd::Vm' opID=2408fce-ee] [VmMo::InitMinEVCKeyIfNecessary] Initializing min EVC key for VM WorkFlowVM [vim.VirtualMachine:vm-157]
2015-03-06T04:01:43.855-07:00 [13072 info 'vpxdvpxdVmomi' opID=2408fce-ee] [ClientAdapterBase::InvokeOnSoap] Invoke done (esxi1vsan.vcloud.local, vpxapi.VpxaService.createVm)

--> This is Step 1 of the Creation Workflow.
[...]
2015-03-06T04:02:00.975-07:00 [13072 info 'commonvpxLro' opID=2408fce-ee] [VpxLRO] -- FINISH task-internal-36643 -- -- VmprovWorkflow --

You can see that the VM will be created on host esxi1vsan. This is the DOM_CLIENT.

Note: A very useful trick is to identify the opID (Operation ID) that relates to the task. You can easily track all the steps using the opID. In this example, the opID is 2408fce-ee.
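
For example, you can follow the same operation across the different logs by filtering on the opID. On the vCenter Server Appliance (the vpxd.log path below is the usual appliance location and may differ in your deployment):

grep 2408fce-ee /var/log/vmware/vpxd/vpxd.log

And on the DOM_CLIENT host:

grep 2408fce-ee /var/log/vpxa.log /var/log/hostd.log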

2. Check vpxa.log of the DOM_CLIENT Host

To filter the vpxa.log for entries relevant to WorkFlowVM, execute this command on the DOM_CLIENT:
 
grep -i workflowvm /var/log/vpxa.log
 
You see log entries similar to this:
 
2015-02-17T11:38:40.390Z [FF88E1A0 verbose 'vpxavpxaDatastoreContext' opID=2408fce-ee-2d] [VpxaDatastoreContext] Resolved URL /vmfs/volumes/vsan:5257a7e0dd6f860e-bxxxxxx8a/WorkFlowVM to localPath /vmfs/volumes/vsan:5257a7e0dd6f860e-bxxxxxxa/WorkFlowVM
 
2015-02-17T11:38:40.767Z [FF88E1A0 info 'hostdds' opID=2408fce-ee-2d] [VpxaHalDSHostagent::CreateDirectory] root directory: '/vmfs/volumes/vsan:5257a7e0dd6f860e-bxxxxxxxa/', display name: 'WorkFlowVM'
 
2015-02-17T11:38:55.198Z [FF88E1A0 verbose 'vpxavpxaNameReserve' opID=2408fce-ee-2d] [VpxaNameReserve] Reserved directory only, name /vmfs/volumes/vsan:5257a7e0dd6f860e-bbxxxxxxx8a/WorkFlowVM, stable name /vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/4028e354-2319-9031-af04-0xxxxxx6b
 
2015-02-17T11:38:55.975Z [FF8F2B70 verbose 'vpxavpxaDatastoreContext' opID=2408fce-ee-95] [VpxaDatastoreContext] Resolved URL ds:///vmfs/volumes/vsan:5257a7e0dd6f860e-bbxxxxxx8a/4028e354-2319-9031-af04-00xxxxx6b/WorkFlowVM.vmx to localPath /vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/4028e354-2319-9031-af04-0xxxxxb/WorkFlowVM.vmx
 
2015-02-17T11:38:55.976Z [FF8F2B70 verbose 'vpxavpxaDatastoreContext' opID=2408fce-ee-95] [VpxaDatastoreContext] Resolved URL ds:///vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/4028e354-2319-9031-af04-00xxxxxxxb/WorkFlowVM.vmdk to localPath /vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4axxxxxx8a/4028e354-2319-9031-af04-0xxxxxxxb/WorkFlowVM.vmdk
 

3. Check hostd.log of the DOM_CLIENT Host

You can find further information in the hostd.log file on the DOM_CLIENT host where the virtual machine is being created and registered. Execute this command:

grep -i workflowvm /var/log/hostd.log
 
You see these entries relating to WorkFlowVM:
 
2015-02-22T12:00:13.170Z [5AEC1B70 info 'Vimsvc.ha-eventmgr' opID=276c9677-4a-90 user=vpxuser] Event 412 : Creating WorkFlowVM2 on host esxvsan1 in ha-datacenter
2015-02-22T12:00:13.256Z [5AEC1B70 info'Vmsvc.vm:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/bec4e954-213b-5b71-b6ef-0xxxxxxxb/WorkFlowVM2.vmx'
2015-02-22T12:00:13.257Z [5AEC1B70 verbose 'Vmsvc' opID=276c9677-4a-90 user=vpxuser] CreateVmOnObjectStoreInt: Create VM initiated [17]:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/bec4e954-213b-5b71-b6ef-005056010e6b/WorkFlowVM2.vmx
[...]
2015-02-22T12:00:15.473Z [FFF40B70 verbose'Vmsvc.vm:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4axxxxxxa/bec4e954-213b-5b71-b6ef-00xxxxxb/WorkFlowVM2.vmx'] Checking accessibility for all storage objects for VM '17': /vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/bec4e954-213b-5b71-b6ef-00sssssssb/WorkFlowVM2.vmx (current state: VM_STATE_CREATING)
2015-02-22T12:00:15.475Z [FFF40B70 verbose 'Vmsvc.vm:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4xxxxxxc8a/bec4e954-213b-5b71-b6ef-005xxxx6b/WorkFlowVM2.vmx'] Storage objects for VM '17' are now accessible, attempting to reload (current state : VM_STATE_CREATING)
[...]
2015-02-22T12:00:48.159Z [5B140B70 info'Vmsvc.vm:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4axxxxxxa/bec4e954-213b-5b71-b6ef-00xxxxxb/WorkFlowVM2.vmx'] Checking for all objects accessibility (VM's current state: VM_STATE_OFF, stable? true)
[...]
2015-02-22T12:00:48.170Z [5B140B70 verbose 'Vmsvc.vm:/vmfs/volumes/vsan:5257a7e0dd6f860e-bb6dxxxxxxxxa/bec4e954-213b-5b71-b6ef-005xxxxb/WorkFlowVM2.vmx'] UpdateFileInfo: Failed to find file size for /vmfs/volumes/vsan:5257a7e0dd6f860e-bb6d4a3699807c8a/bec4e954-213b-5b71-b6ef-0xxxxxxxb/WorkFlowVM2.nvram: No such file or directory
 
--> Ignore the message related to nvram. This is normal because the VM has not been powered on.
 

Note: These log messages come from the creation of a different virtual machine, WorkFlowVM2, whereas the logs in the previous steps were for WorkFlowVM. However, the workflow and the messages produced are the same.

 

4. Use CMMDS to Identify the Elements Related to the VM

You see some DOM activity in the /var/log/vmkernel.log file of the DOM_CLIENT. Use the less command and search for the string "DOM: ":

less /var/log/vmkernel.log
 
You see entries like this:
 
2015-02-17T11:38:40.854Z cpu1:1004885000)DOM: DOM2PC_InitSession:5553: Instantiating 2PC scanner operation for owner object 4028e354-2319-9031-af04-005056010e6b
2015-02-17T11:38:41.509Z cpu1:1003245085)DOM: DOMOwnerObjectDoInitRDT:1159: Object 4028e354-2319-9031-af04-0xxxxxxxb state: 5, initRDTCnt: 1, Marking leaves absent.
 
--> This is Step 2 of the Creation Workflow.
 
Using the objectID shown above, 4028e354-2319-9031-af04-005056010e6b, you can pull relevant information from the vSAN trace files. Use this vsanTraceReader command:
/usr/lib/vmware/vsan/bin/vsanTraceReader vsantracesUrgent--TIMESTAMP.gz | grep 4028e354-2319-9031-af04-00xxxxxxb

You see entries like this:

2015-02-17T11:44:01.247582 [113321423] [cpu1] [4b7b218d csn=1 lsn=2 readPolicy] DOMTrace2PCProcessResponse:3555: {'desc-32': 0x505088c8, 'state': 'PREPARING', 'protocol': 'NO_COMMIT', 'listType': 'QueriesList', 'completed': False, 'ownerUuid': '4028e354-2319-9031-af04-0xxxxxxb', 'status': 'VMK_OK'}
2015-02-17T11:44:01.247589 [113321424] [cpu1] [4b7b218d csn=1 lsn=2 readPolicy] DOMTrace2PCCompleteOperation:775: {'desc-32': 0x505088c8, 'state': 'PREPARING', 'protocol': 'NO_COMMIT', 'listType': 'QueriesList', 'completed': True, 'ownerUuid': '4028e354-2319-9031-af04-0xxxxxb', 'resultCode': 'VMK_OK', 'originalStatus': 'VMK_OK'}
2015-02-17T11:44:01.247927 [113321427] [cpu1] [4b7b218d OWNER readPolicy] DOMTraceFlowctlRecordData:683: {'op': 0x412f3725e180, 'obj': 0x412f38709940, 'objUuid': '4028e354-2319-9031-af04-005056010e6b', 'congestion': 0, 'completion time': '00:00:00.040124'}
2015-02-17T11:44:01.248015 [113321429] [cpu1] [OWNER] DOMTraceOwnerCheckObjectIsStale:45: {'obj': 0x412f38709940, 'objUuid': '4028e354-2319-9031-af04-00xxxxxxb', 'configCSN': 1, 'objCSN': 1}
2015-02-17T11:44:01.248045 [113321431] [cpu1] [4b7b218d OWNER ownerSetup] DOMTraceSetupOpCompleteTask:1192: {'op': 0x412f371f5f00, 'obj': 0x412f38709940, 'objUuid': '4028e354-2319-9031-af04-00xxxxxb', 'taskIdx': 'READ_POLICY', 'status': 'VMK_OK'}
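
Note: The trace file name used above is an example. The vSAN trace files live on the ESXi host and their exact location can vary by release and scratch configuration, so a simple search (a generic find, not a vSAN-specific tool) helps locate the most recent ones:

find /var/log /scratch -name 'vsantraces*.gz' 2>/dev/null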
 
To investigate further using the cmmds-tool command, we need to know which host is the DOM_OWNER and which is the DOM_CLIENT of the virtual machine WorkFlowVM. We know the DOM_CLIENT is the ESXi host where the WorkFlowVM.vmx file is registered. Normally vSAN places the DOM_CLIENT and DOM_OWNER on the same host, but that is not always the case.
 
Run this command on the ESXi host that is the DOM_CLIENT (where the VM is registered) to find the DOM_OWNER and other information:
 
cmmds-tool find -t DOM_NAME --format json | less
 
You see this in the output:
 
"uuid": "4028e354-2319-9031-af04-0xxxxxxxb",
"owner": "545521a8-49ff-63f1-30d9-0xxxxxxxe",
"health": "Healthy",
"revision": "0",
"type": "DOM_NAME",
"flag": "2",
"md5sum": "372c6b5614b06094e20c4c33ab5aebdd",
"valueLen": "24",
"content": {"ufn": "WorkFlowVM"},
"errorStr": "(null)"
}
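
If the cluster has many objects, you can narrow the DOM_NAME output to the virtual machine you are tracing instead of paging through everything (a plain grep filter; adjust the number of context lines to taste):

cmmds-tool find -t DOM_NAME --format json | grep -B 8 -A 1 '"WorkFlowVM"'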
 
Once we have the UUID (4028e354-2319-9031-af04-0xxxxxxx6b) of the VM, we will query the cmmds utility for objects of type DOM_OBJECT:
 
cmmds-tool find -t DOM_OBJECT --format json | less
 
You see this output:
,{
"uuid": "4028e354-2319-9031-af04-0xxxxxxxb",
"owner": "545521a8-49ff-63f1-30d9-0xxxxxxxe",
"health": "Healthy",
"revision": "5",
"type": "DOM_OBJECT",
"flag": "2",
"md5sum": "b7fbf0cb545c2cd3271c4114a227b183",
"valueLen": "1040",
"content": {"type": "Configuration", "attributes": {"CSN": 2, "addressSpace": 273804165120, "compositeUuid": "4028e354-2319-9031-af04-00xxxxxxb"}, "child-1": {"type": "RAID_1", "attributes": {}, "child-1": {"type": "Component", "attributes": {"capacity": [0, 273804165120], "addressSpace": 273804165120, "componentState": 5, "componentStateTS": 1424173121, "faultDomainId": "54550ed6-79de-4420-d3b5-0xxxxxx"}, "componentUuid": "4128e354-54e2-8490-579c-005056010e6b", "diskUuid": "526e9132-e3f0-97f6-b30a-axxxxxxx3"}, "child-2": {"type": "Component", "attributes": {"capacity": [0, 273804165120], "addressSpace": 273804165120, "componentState": 5, "componentStateTS": 1424173121, "faultDomainId": "545521a8-49ff-63f1-30d9-0xxxxxxe"}, "componentUuid": "4128e354-036f-8a90-6b50-0xxxxxb", "diskUuid": "526cc8b1-6c03-2cd6-b64e-69xxxxxx1"}}, "child-2": {"type": "Witness", "attributes": {"componentState": 5, "componentStateTS": 1424173121, "isWitness": 1, "faultDomainId": "54550cb9-cb59-c7bc-8ffd-0xxxxxxb"}, "componentUuid": "4128e354-9818-9090-a2cd-0xxxxxb", "diskUuid": "5268e5a8-51f0-dc1e-9901-f4xxxxxxxb"}},
"errorStr": "(null)"
}
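
If you already know the object UUID, you can query that single entry directly instead of paging through every DOM_OBJECT entry. On the builds we have worked with, cmmds-tool find also accepts a -u option to restrict the query to one UUID (check the cmmds-tool usage output on your host to confirm):

cmmds-tool find -t DOM_OBJECT -u 4028e354-2319-9031-af04-005056010e6b --format json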
 
We now know that the owner of the DOM_OBJECT of the WorkFlowVM virtual machine has the UUID 545521a8-49ff-63f1-30d9-005056010e9e.
 
To map this UUID to a hostname run this command:

cmmds-tool find -t HOSTNAME --format json | less

In the resulting output, find the entry whose UUID matches the owner UUID noted above. It corresponds to host esxi3vsan. So, in this example, the DOM_CLIENT is esxi1vsan and the DOM_OWNER is esxi3vsan:
 
,{
 
"uuid": "545521a8-49ff-63f1-30d9-0xxxxxxxe",
"owner": "545521a8-49ff-63f1-30d9-0xxxxxxxe",
"health": "Healthy",
"revision": "0",
"type": "HOSTNAME",
"flag": "2",
"md5sum": "87fdbee6042ea71f785a2836b7170efe",
"valueLen": "24",
"content": {"hostname": "esxi3vsan"},
"errorStr": "(null)"
}
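
Alternatively, when you are logged in to a host, you can read that host's own node UUID directly and compare it with the owner UUID noted above (the field is named Local Node UUID in the releases we have seen):

esxcli vsan cluster get | grep -i 'Local Node UUID'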

 

5. Trace the creation workflow in the CLOMD log

Next, examine the clomd log on the host that is the DOM_CLIENT of the virtual machine (esxi1vsan):
 
less /var/log/clomd.log
 
Search through the log for entries like these:
 
2015-02-17T11:38:40Z clomd[1000014566]: CLOMFetchDOMEvent:Read 576 bytes of 824 bytes from DOM
 
--> CLOM receives an event from DOM

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOMFetchDOMEvent:Read an additional 248 bytes from DOM

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOMExpressionToPolicy:expression: (("stripeWidth" i1) ("proportionalCapacity" (i0 i100)) ("hostFailuresToTolerate" i1) ("spbmProfileId" "b7e59bef-f95f-4576-8ee1-0xxxxxxx3") ("spbmProfileGenerationNumber" l+0))

 
--> CLOM receives the storage policy chosen for WorkFlowVM.

--> This is Step 3 of the Creation Workflow.
 

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOM_PostWorkItem:Posted a work item for 4028e354-2319-9031-af04-005056010e6b Type: PLACEMENT delay 0 (Success)

2015-02-17T11:38:40Z clomd[1000014566]: W110: Object size 273804165120 bytes with policy: (("stripeWidth" i1) ("proportionalCapacity" (i0 i100)) ("hostFailuresToTolerate" i1) ("spbmProfileId" "b7e59bef-f95f-4576-8ee1-06xxxxxx3") ("spbmProfileGenerationNumber" l+0))

2015-02-17T11:38:40Z clomd[1000014566]: W110: Object size 273804165120 bytes with policy: (("readOPS" (i0 i200)) ("stripeWidth" i1) ("capacity" (l0 l273804165120)) ("proportionalCapacity" (i0 i100)) ("hostFailuresToTolerate" i1) ("availability" i99999992) ("reliabilityExponent" i30) ("reliabilityBase" i100) ("affinity" [ UUID_NULL UUID_NULL UUID_NULL UUID_NULL]))

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOMChooseFromEnumeration:Cost 547608330242 sane compliant 1

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOMMarshalConfiguration:Marshaling config for UUID 4028e354-2319-9031-af04-0xxxxxxxb

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOM_LogDomMessage:referent 4028e354-2319-9031-af04-0xxxxxxxb length 1312 object size: 273804165120 type: 2

2015-02-17T11:38:40Z clomd[1000014566]: I120: CLOM_LogDomMessage:expression ("Configuration" (("addressSpace" l273804165120) ("compositeUuid" 4028e354-2319-9031-af04-005056010e6b)) ("RAID_1" () ("Component" (("capacity" (l0 l273804165120)) ("addressSpace" l273804165120) ("faultDomainId" 54550ed6-79de-4420-d3b5-0xxxxxxxc)) UUID_NULL 526e9132-e3f0-97f6-b30a-axxxxxx3) ("Component" (("capacity" (l0 l273804165120)) ("addressSpace" l273804165120) ("faultDomainId" 545521a8-49ff-63f1-30d9-0xxxxxxe)) UUID_NULL 526cc8b1-6c03-2cd6-b64e-6xxxxxxx1)) ("Witness" (("isWitness" i1) ("faultDomainId" 54550cb9-cb59-c7bc-8ffd-0xxxxxxxb)) UUID_NULL 5268e5a8-51f0-dc1e-9901-fxxxxxxb))

 
--> CLOM sends the instructions to DOM about how to create the components for WorkFlowVM.
 
--> This is Step 4 of the Creation Workflow.
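
If the clomd log is busy, you can narrow the output to the object being created by filtering on the same object UUID used in the earlier steps:

grep 4028e354-2319-9031-af04-005056010e6b /var/log/clomd.log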
 

6. Verify the Completion of the creation workflow on the DOM_OWNER

Now we need to log in to the DOM_OWNER host, esxi3vsan, to follow the operations in the /var/log/vmkernel.log file.
Use egrep to filter for the vSAN-related components, and then search the output for the object UUID:
 
egrep -i '(plog|dom|lsom|vsan|rdt|cmmds|clom)' /var/log/vmkernel.log | less
 
2015-02-17T11:44:01.203Z cpu1:1000014414)DOM: DOM2PC_InitSession:5553: Instantiating 2PC scanner operation for owner object 4028e354-2319-9031-af04-0ssssssb
2015-02-17T11:44:01.204Z cpu1:1000014414)DOM: DOMOwnerObjectDoInitRDT:1159: Object 4028e354-2319-9031-af04-005056010e6b state: 5, initRDTCnt: 1, Marking leaves absent.
2015-02-17T11:44:01.248Z cpu1:1000014414)DOM: DOMOwnerSetupReadPolicyCompleteTask:12294: Object 4028e354-2319-9031-af04-0xxxxxxxb starting with CSN 1 LSN 2051
 
--> DOM now knows the policy of the VM.
 
--> This is Step 5 of the Creation Workflow.
 
2015-02-17T11:44:01.254Z cpu1:1000014414)DOM: DOMOwnerCCPIssueChildCompleteTask:14410: Finished CCP for 4028e354-2319-9031-af04-0sssssssb : Success (0x0)
 
In this last message, you see that the DOM_OWNER has successfully read the setup policy and completed the task.
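
Once the creation has completed, on more recent vSAN releases you can also review the resulting object layout and health from the command line. The esxcli vsan debug object namespace is only available on newer releases and its sub-commands vary, so run esxcli vsan debug object on your host first to see what is available; where present, a command such as the following lists objects with their components and states:

esxcli vsan debug object list | less

Search that output for the object UUID 4028e354-2319-9031-af04-005056010e6b.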
 

7. Investigate the LSOM operations with vsanTraceReader

Finally, you can check the LSOM operations. These are low-level operations that you review by reading the vSAN trace files on the hosts. You need to SSH to one of the vSAN node hosts and filter the trace file using the diskUuid. You can find the diskUuid by running the command esxcli vsan storage list, as shown below.
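
For example, to list only the device names and their vSAN disk UUIDs (the field names below are taken from a typical esxcli vsan storage list output and may differ slightly between releases):

esxcli vsan storage list | grep -E 'Device:|VSAN UUID:'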
 
Use the vsanTraceReader command to read a vSAN trace file, and filter that output for the diskUuid. In this case, the diskUuid is 525cd69a-b083-660c-deb8-4568e3b1bcaa. Execute this command to read and filter the vSAN trace file:
 
/usr/lib/vmware/vsan/bin/vsanTraceReader vsantracesUrgent--2014-12-15T18h20m13s609.gz | grep 525cd69a-b083-660c-deb8-4568e3b1bcaa
 
You see this output:

2014-12-15T18:20:07.099791 [65] [cpu1] [DISK] DOMTraceObjectSetState:2873: {'obj': 0x412ed052d640, 'objUuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'curState': 'initialize', 'prevState': 'none'}
2014-12-15T18:20:07.099798 [66] [cpu1] [6e847a3a DISK] DOMTraceObjectInitObject:311: {'obj': 0x412ed052d640, 'objUuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'status': 'VMK_OK'}
2014-12-15T18:20:07.099799 [67] [cpu1] [6e847a3a DISK] DOMTraceObjectInitRootObjectComplete:906: {'objUuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'obj': 0x412ed052d640, 'status': 'VMK_OK'}
2014-12-15T18:20:07.099800 [68] [cpu1] [6e847a3a DISK ROOT_OBJ_FACTORY] DOMTraceRootObjFactoryInitObj:195: {'newObjUuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'newAssoc': 0x0, 'newObj': 0x412ed052d640, 'returnStatus': 'VMK_OK', 'opStatus': 'VMK_OK'}
2014-12-15T18:20:07.099805 [69] [cpu1] [6e847a3a DISK ROOT_OBJ_FACTORY] DOMTraceRootObjFactoryDispatchCompleted:575: {'obj': 0x0, 'newObjUuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'newAssoc': 0x0, 'status': 'VMK_OK', 'factoryOpState': 'CALL_CALLBACK', 'newObjType': 'DISK'}
2014-12-15T18:20:07.099821 [70] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:714: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'PUBLISH_DISK_HEALTH', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:07.099833 [71] [cpu1] [6e847a3b DOM_EVENT] DOMTraceEventHandleHealth:372: {'baseOp': 0x412ed021dec0, 'healthOpState': 'FIRST', 'errorCode': 'VMK_OK', 'uuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'healthFlags': 0x0}
2014-12-15T18:20:07.422317 [94] [cpu1] [6e847a3b DOM_EVENT] DOMTraceEventHandleHealth:372: {'baseOp': 0x412ed021dec0, 'healthOpState': 'DONE', 'errorCode': 'VMK_OK', 'uuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'healthFlags': 0x0}
2014-12-15T18:20:07.422342 [96] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:822: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'WAIT_FOR_DISK_STATUS', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:07.422349 [98] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:1948: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'WAIT_FOR_DISK_STATUS', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:07.547727 [110] [cpu1] [410b4ded09b0] CMMDSTraceInitUpdate:707: {'updateType': 'Add', 'type': 4, 'revision': 0, 'srcUUID': '54550cb9-cb59-c7bc-8ffd-005056010e6b', 'uuid': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:07.547736 [111] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:1948: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'WAIT_FOR_DISK_STATUS', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:07.647857 [113] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:1948: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'WAIT_FOR_DISK_STATUS', 'errorCode':
2014-12-15T18:20:08.365682 [127] [cpu0] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:1948: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'WAIT_FOR_DISK_STATUS', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:08.466445 [139] [cpu1] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:1649: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'INSTANTIATE_COMPONENTS', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}
2014-12-15T18:20:08.466582 [145] [cpu0] [6e847a39 COMP DOM_EVENT] DOMTraceEventHandleComponentArrival:3162: {'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa', 'compUUID': '18018b54-e0ef-c1dc-b196-0xxxxxxxxxxe'}
2014-12-15T18:20:08.466656 [168] [cpu0] [6e847a39 DOM_EVENT] DOMTraceEventHandleDiskArrival:2055: {'baseOp': 0x412ed021e280, 'diskType': 0, 'arrivalOpState': 'DONE', 'errorCode': 'VMK_OK', 'diskUUID': '525cd69a-b083-660c-deb8-4xxxxxxxxxa'}

Review the output and note the values associated with the "errorCode" entries; VMK_OK indicates that the operation succeeded, while any other value points to the failing step.
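
A quick way to spot anything other than VMK_OK is to summarize the error codes in the filtered trace output (a plain text filter over the vsanTraceReader output shown above; adjust the pattern if the trace format differs on your build):

/usr/lib/vmware/vsan/bin/vsanTraceReader vsantracesUrgent--2014-12-15T18h20m13s609.gz | grep -o "'errorCode': '[A-Z_]*'" | sort | uniq -c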
 

8. Summary

This workflow covers all the steps involved in creating a VM on the vSAN datastore.
 
When troubleshooting, follow the same workflow, examine the logs, and identify the step at which the process fails.