When deploying and staging a large number of applications in Tanzu Application Service for VMs (TAS for VMs), staging fails with the error "Failed to upload payload for droplet" but provides no further details:
Failed to upload payload for droplet
Uploading failed
Cell 33170a20-e7da-4871-bc38-e1b3d7194eb3 stopping instance 11db4cd1-688f-4157-aa62-dd8194df6197
Cell 33170a20-e7da-4871-bc38-e1b3d7194eb3 destroying container for instance 11db4cd1-688f-4157-aa62-dd8194df6197
Cell 33170a20-e7da-4871-bc38-e1b3d7194eb3 successfully destroyed container for instance 11db4cd1-688f-4157-aa62-dd8194df6197
Error staging application: Staging error: staging failed
FAILED
A variation of the logs does not show the "Failed to upload payload for droplet" error, but only the staging error after the droplet upload starts (and never completes):
Uploading droplet, build artifacts cache...
Uploading droplet...
Uploading build artifacts cache...
Uploaded build artifacts cache (180.3M)
FAILED
Error staging application: Staging error: staging failed
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
Note: The preceding log excerpts are only examples. Dates, times, and environment-specific values vary depending on your environment.
1. Collect the logs from the Diego Cell. In this example, the cell GUID is 33170a20-e7da-4871-bc38-e1b3d7194eb3.
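Assuming BOSH CLI access to the foundation, the cell's logs can be pulled with `bosh logs`; the deployment name `cf-xxxx` below is a placeholder that varies per foundation:

```shell
# Placeholder: replace "cf-xxxx" with your TAS for VMs deployment name.
# Confirm which Diego Cell VM matches the cell GUID from the app logs.
bosh -d cf-xxxx vms | grep 33170a20-e7da-4871-bc38-e1b3d7194eb3

# Download that cell's logs, which include /var/vcap/sys/log/rep/.
bosh -d cf-xxxx logs diego_cell/33170a20-e7da-4871-bc38-e1b3d7194eb3
```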
2. Review /var/vcap/sys/log/rep/rep.stdout.log, narrow the log scope using the instance GUID and the action step, and locate the error message.

In this example, the instance GUID is 11db4cd1-688f-4157-aa62-dd8194df6197 and the action step is rep.executing-container-operation.task-processor.run-container.containerstore-run.node-run.action.upload-step.
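A minimal way to narrow the log, assuming shell access on the cell (the GUID and path below are taken from this example):

```shell
# Filter rep.stdout.log down to the failing instance, then to the upload step.
grep '11db4cd1-688f-4157-aa62-dd8194df6197' /var/vcap/sys/log/rep/rep.stdout.log \
  | grep 'action.upload-step'
```

This should surface entries like the one below.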
{"timestamp":"2019-12-19T10:01:51.197373273Z","level":"error","source":"rep","message":"rep.executing-container-operation.task-processor.run-container.containerstore-run.node-run.action.upload-step.failed-to-upload","data":{"container-guid":"11db4cd1-688f-4157-aa62-dd8194df6197","container-state":"reserved","error":"Upload failed: Status code 413","from":"/tmp/droplet","guid":"05f95ee5-99b9-4a13-a630-238abc8efb9a","session":"266731.1.3.3.1.2.5"}}
Note: Status code 413 indicates HTTP status code 413 - Payload Too Large.
In addition, check /var/vcap/sys/log/cloud_controller_ng/nginx-error.log on the Cloud Controller (CC) VMs for an nginx "too large body" error.
2019/12/19 10:01:51 [error] 9#0: *3619897 client intended to send too large body: 1213716822 bytes, client:
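The byte count in the nginx log line confirms the cause: converting it to megabytes shows the payload exceeds the default 1024 MB upload limit.

```shell
# Convert the rejected body size from the example log line to MB and
# compare it against the default 1024 MB limit.
bytes=1213716822
mb=$(( bytes / 1024 / 1024 ))
echo "payload: ${mb} MB, limit: 1024 MB"   # payload: 1157 MB, limit: 1024 MB
```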
3. If you see the conditions above, adjust the following setting in Ops Manager.
When using the internal VMware Tanzu Application Service for VMs (TAS for VMs) blobstore, Diego Cells stage apps and upload droplets to the blobstore through the Cloud Controller. The maximum size of an uploaded file is configurable at Ops Manager > TAS for VMs > Application Developer Control > Maximum File Upload Size (MB); the default is 1024 MB. This limit also applies to app packages uploaded by `cf push`.
Aside from increasing the allowed upload size limit, we recommend reducing application package or droplet size by doing the following:
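As one hedged illustration of shrinking the package uploaded by `cf push`: a `.cfignore` file in the app root excludes files from the uploaded package, using the same pattern syntax as `.gitignore`. The entries below are placeholders; list whatever your app does not need for staging.

```shell
# Exclude build artifacts and dependency caches from the uploaded package.
# Adjust the patterns to match your app's layout.
cat > .cfignore <<'EOF'
node_modules/
target/
*.log
EOF
```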