How to troubleshoot and resolve cf push app issue due to running out of blobstore space


Article ID: 383657


Products

VMware Tanzu Application Service, VMware Tanzu Application Service for VMs, VMware Tanzu Application Platform per vCPU, VMware Tanzu Application Platform, VMware Tanzu Application Platform SM

Issue/Introduction

A user deploying an application with 'cf push example-app' sees the push fail to start the app with the following errors:

 

2024-12-02T11:42:02.487-07:00 [STG/0] [OUT] Exit status 0

2024-12-02T11:42:02.487-07:00 [STG/0] [OUT] Uploading droplet, build artifacts cache...

2024-12-02T11:42:02.487-07:00 [STG/0] [OUT] Uploading droplet...

2024-12-02T11:42:02.487-07:00 [STG/0] [OUT] Uploading build artifacts cache...

2024-12-02T11:42:02.548-07:00 [STG/0] [OUT] Uploaded build artifacts cache (129B)

2024-12-02T11:42:04.152-07:00 [API/11] [OUT] Creating droplet for app with guid #######-####-####-####-########

2024-12-02T11:42:30.091-07:00 [API/27] [OUT] Creating droplet for app with guid #######-####-####-####-########

2024-12-02T11:42:55.136-07:00 [API/26] [OUT] Creating droplet for app with guid #######-####-####-####-########

2024-12-02T11:43:18.400-07:00 [STG/0] [ERR] Failed to upload payload for droplet

2024-12-02T11:43:18.427-07:00 [STG/0] [ERR] Uploading failed

2024-12-02T11:43:18.701-07:00 [STG/0] [OUT] Cell #######-####-####-####-######## stopping instance #######-####-####-####-########

2024-12-02T11:43:18.701-07:00 [STG/0] [OUT] Cell #######-####-####-####-######## destroying container for instance #######-####-####-####-########

2024-12-02T11:43:18.767-07:00 [API/27] [ERR] Failed to stage build: staging failed

Environment

Tanzu Application Service

Cause

There can be many reasons for app deployment failures. In this case, the error message 'Failed to upload payload for droplet' in the log output indicates that the Diego cell failed to upload the app droplet to the blobstore during staging.

The next step is to check the blobstore status using the 'bosh vms --vitals' command. The output shows that the foundation is using the internal MinIO server blobstore.
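As a minimal sketch of that check, assuming a BOSH environment alias of 'env' (a placeholder) and the deployment name shown in the output below:

# List VM health, including persistent disk usage, for the blobstore deployment
bosh -e env -d minio-internal-blobstore-####### vms --vitals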

Deployment 'minio-internal-blobstore-#######'

Instance                                      Process  Persistent Disk Usage
minio-server/#######-####-####-####-########  running  95% (1i%)
minio-server/#######-####-####-####-########  running  95% (1i%)
minio-server/#######-####-####-####-########  running  95% (1i%)
minio-server/#######-####-####-####-########  running  95% (1i%)
minio-server/#######-####-####-####-########  running  95% (1i%)

 

Bosh ssh into a minio-server instance and check the disk usage:
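A sketch of that step, again assuming the 'env' alias; 'minio-server/0' stands in for whichever instance you want to inspect:

# Open an SSH session on a minio-server instance...
bosh -e env -d minio-internal-blobstore-####### ssh minio-server/0

# ...or run the disk check non-interactively
bosh -e env -d minio-internal-blobstore-####### ssh minio-server/0 -c 'df -h'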

minio-server/########-####-######-##########:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G     0  7.9G   0% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           7.9G  807M  7.1G  11% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1       2.9G  1.4G  1.4G  51% /
/dev/sdb2        16G  282M   15G   2% /var/vcap/data
tmpfs            16M   52K   16M   1% /var/vcap/data/sys/run
/dev/sdd1       4.0T  1.8T  2.1T  46% /var/vcap/store_migration_target
/dev/sdc1       2.0T  1.9T  582M 100% /var/vcap/store

You can see the problem: the minio-server instances are running out of persistent disk space at 95% usage, with /var/vcap/store at 100% use. The staging Diego cell failed to upload the droplet to the blobstore because no disk space was available.

Here are a few possible reasons for using up or running out of blobstore space:

  • An increase in large deployments requires more blobstore space.
  • The automatic blob cleanup fails to delete files, or something else goes wrong.
  • The buildpacks generate and cache too many large files during staging, which can quickly fill up the blobstore and require a manual blob cleanup.

Note that the Cloud Controller does not track resource cache and buildpack cache blob types in its database, so they do not get cleaned up automatically.
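To see which directories are consuming the space, you can summarize usage under the persistent store on a minio-server instance; bucket directory names vary by deployment, so treat this as illustrative:

# From a bosh ssh session on the minio-server: per-directory usage, smallest to largest
du -sh /var/vcap/store/* | sort -h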

Resolution

Free up blobstore disk space by manually cleaning up the buildpack cache through the blobstore API, using the following cf CLI command as an admin user:

cf curl -X DELETE /v2/blobstores/buildpack_cache
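The endpoint handles the deletion asynchronously in the background. As a sketch, adding the cf curl '-i' flag prints the response headers so you can confirm the request was accepted:

cf curl -X DELETE /v2/blobstores/buildpack_cache -i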

 

Allow the command a few minutes to clean up the buildpack cache, then check the minio-server instances' persistent disk usage with 'bosh vms --vitals'.

 

Deployment 'minio-internal-blobstore-#######'

Instance                                      Process  Persistent Disk Usage
minio-server/#######-####-####-####-########  running  60% (1i%)
minio-server/#######-####-####-####-########  running  60% (1i%)

...

 

It reports that the disk usage dropped from 95% to 60%, and running cf push for the app completed with no issues.

 

Additionally, you can free up more blobstore space by manually cleaning up the resource cache as follows:

 

  • For an internal blobstore:
    • Run bosh ssh to connect to the blobstore VM (NFS or WebDAV) and remove the contents of the /var/vcap/store/shared/cc-resources directory. Note that on a MinIO server, the cc-resources directory is located under /var/vcap/store in a different directory than the NFS or WebDAV shared directory; see the sketch after this list.

 

  • For an external blobstore:
    • Use the file store’s API to delete the contents of the resources bucket.
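A minimal sketch of the internal-blobstore cleanup, run from a bosh ssh session on the blobstore VM; the NFS/WebDAV path is the one named above, while on a MinIO server you should locate the actual cc-resources directory first:

# Locate the cc-resources directory (its parent differs between NFS/WebDAV and MinIO)
find /var/vcap/store -maxdepth 3 -type d -name cc-resources

# NFS/WebDAV layout; on MinIO, replace the path with whatever find returned
rm -rf /var/vcap/store/shared/cc-resources/*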

 

Warning: Do not manually delete app package, buildpack, or droplet blobs from the blobstore.