vRA upgrade fails in vRSLCM with package download error

Article ID: 345969


Products

VMware Aria Suite

Issue/Introduction

Symptoms:

  • vRealize Automation upgrade fails.

  • You see the following messages in vami.log, located in /opt/vmware/var/log/vami/:

10/01/2023 15:57:39 [DEBUG] Open URL complete: https://vrslcm.corp.local/repo/productBinariesRepo/vra/8.10.1/upgrade/update/package-pool/prelude-layer-9d8836cb4a86ccdd4dee161d5debfa047ea3939b8fca3c2f0003b18d724fc810-1-1.noarch.rpm
10/01/2023 15:57:39 [ERROR] Invalid file size. url=/package-pool/prelude-layer-9d8836cb4a86ccdd4dee161d5debfa047ea3939b8fca3c2f0003b18d724fc810-1-1.noarch.rpm, downloadsize=1842309062, filesize=2905836008
10/01/2023 15:57:39 [DEBUG] Deleting file: /tmp/vmware_update_download_w5HBXX
10/01/2023 15:57:39 [ERROR] Setting job error information. jobid=20, errorCode=9, errorString=Error during package download. Please try again.

  • Other messages may be similar to:

03/02/2023 22:06:01 [ERROR] CURL error: transfer closed with outstanding read data remaining URL: https://vrslcm.corp.local/repo/productBinariesRepo/vra/8.9.1/upgrade/update/package-pool/prelude-layer-7683e310fb9ab017be7ac1c5234f39dcfa7f6387c614c1f50b46c959060d8807-1-1.noarch.rpm
03/02/2023 22:06:01 [ERROR] Sleep 10 seconds and retry.
03/02/2023 22:07:23 [ERROR] CURL error: transfer closed with outstanding read data remaining URL: https://vrslcm.corp.local/repo/productBinariesRepo/vra/8.9.1/upgrade/update/package-pool/prelude-layer-7683e310fb9ab017be7ac1c5234f39dcfa7f6387c614c1f50b46c959060d8807-1-1.noarch.rpm

Note: You can also encounter this issue when downloading a vRA support bundle from vRSLCM, where the downloaded support bundle is either corrupted or only partially downloaded.
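To locate these errors quickly on the vRA appliance, a simple search of the VAMI log is usually enough (illustrative command; the exact message text can vary between versions):

# On the vRA appliance: search the VAMI update log for failed package downloads
grep -E "Invalid file size|Error during package download|CURL error" /opt/vmware/var/log/vami/vami.log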

 

Environment

VMware vRealize Automation 8.8.x

Cause

This issue has been observed on networks with low bandwidth. The download of RPM files from vRSLCM to vRA takes a long time, and the connection is eventually closed by vRSLCM, leaving a partial download on the vRA side.

The RPMs are downloaded from vRSLCM through the nginx proxy service that runs inside vRSLCM. With the existing nginx configuration settings, this does not work reliably on a slow network: if data is sent to vRA faster than vRA can process it, vRSLCM closes the connection.
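For context, per the nginx documentation proxy buffering is enabled by default and nginx spools at most proxy_max_temp_file_size (1024m by default) of an upstream response to a temporary file on disk; the shipped vRSLCM configuration does not appear to set these directives explicitly, which is what the change below addresses. The following commands are an illustrative way to check what is configured on the vRSLCM appliance and to watch the temporary buffer grow during a download:

# Check whether proxy buffering limits are set explicitly in the vRSLCM nginx config
# (no output means nginx falls back to its built-in defaults)
grep -nE 'proxy_(buffering|max_temp_file_size|temp_path)' /etc/nginx/nginx.conf

# Watch the temporary buffer directory grow while an RPM download is in progress (Ctrl+C to stop)
while true; do du -sh /etc/nginx/proxy_temp; sleep 10; done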

Resolution

Additional configuration settings need to be added to the nginx configuration file to make downloads work on a slow network. Follow the steps below:

  1. Take a snapshot of the vRSLCM appliance.
  2. SSH to the vRSLCM appliance.
  3. Edit this file: /etc/nginx/nginx.conf
  4. Replace this section (keep the existing proxy_pass directive and its target unchanged):
location /repo/ {
      proxy_pass;
    }
with this:
location /repo/ {
      proxy_pass;
      proxy_max_temp_file_size 4096m;
    }
  5. Save the file and verify that the configuration is correct using the command:
nginx -t

It should produce output similar to the following:

root@vrslcm [ /etc/nginx ]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
  6. Restart the nginx service using the command:
systemctl restart nginx
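Optionally, confirm that the nginx service is running again after the restart (illustrative check):

# The command should report "active" once the restart has completed
systemctl is-active nginx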

 

Even after making the above changes, in some environments where network bandwidth is very slow, the RPM download may still fail with the following error in /var/log/nginx/error.log on the vRSLCM appliance:


2022/10/13 09:52:56 [crit] 905#0: *11090 pwritev() "/etc/nginx/proxy_temp/7/52/0000000527" failed (28: No space left on device) while reading upstream, client: 10.1.151.90, server: vrslcm.corp.local, request: "GET /repo/productBinariesRepo/vra/8.8.2/upgrade/update/package-pool/prelude-layer-02ca190b7dcf81a80b63a7f7a80eff4aefc355627bb5a038a0d158ef001cf77e-1-1.noarch.rpm HTTP/1.1", upstream: "http://127.0.0.1:8080/repo/productBinariesRepo/vra/8.8.2/upgrade/update/package-pool/prelude-layer-02ca190b7dcf81a80b63a7f7a80eff4aefc355627bb5a038a0d158ef001cf77e-1-1.noarch.rpm", host: "vrslcm.corp.local"
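This error indicates that the filesystem holding nginx's temporary buffer directory has run out of space. A quick way to confirm this on the vRSLCM appliance (output values will vary by environment):

# Free space on the root partition and current size of the nginx buffer directory
df -h /
du -sh /etc/nginx/proxy_temp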

The default location where nginx stores its proxy buffer data is /etc/nginx/proxy_temp, which resides on the / partition; this partition is only 8 GB in size on vRSLCM, and there is no documented way to extend it. Instead, the default buffer location can be overridden by following the steps below:

  1. Take a snapshot of the vRSLCM appliance.
  2. SSH to the vRSLCM appliance.
  3. Edit this file: /etc/nginx/nginx.conf
  4. Replace this section (again, keep the existing proxy_pass directive and its target unchanged):

location /repo/ {
      proxy_pass;
    }

with this:

location /repo/ {
      proxy_pass;
      proxy_max_temp_file_size 4096m;
      proxy_temp_path /data/temp/;
    }

  5. Save the file and verify that the configuration is correct using the command:
nginx -t

It should produce output similar to the following:

root@vrslcm [ /etc/nginx ]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
  6. Restart the nginx service using the command:
systemctl restart nginx
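After the restart, nginx should create the new temporary path on startup. A quick sanity check that the buffer location has moved to the /data partition, which is assumed here to have more free space than / (as in a typical vRSLCM deployment); paths follow the configuration above:

# The service should be active, and the new temp path should exist on /data
systemctl is-active nginx
ls -ld /data/temp
df -h /data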

Compared to the first procedure, the only new entry added to the /etc/nginx/nginx.conf file in these steps is: "proxy_temp_path /data/temp/;"

After performing the above steps, retry the vRA upgrade from vRSLCM.