Gprestore Fails to Download Files When Restoring from S3-Compatible Storage



Article ID: 403749


Updated On:

Products

VMware Tanzu Greenplum / Gemfire

Issue/Introduction

When using gprestore to restore data from S3-compatible storage, the process may fail with an error similar to the following:

20250701:15:30:07 gprestore:gpadmin:xxx:792348-[DEBUG]:-Running plugin setup for restore on segments
20250701:16:28:07 gprestore:gpadmin:xxx:792348-[CRITICAL]:-exit status 1: 20250701:16:28:07 gpbackup_s3_plugin:gpadmin:gdm:793738-[ERROR]:-Error while downloading test1/backups3/backups/20250701/20250701152545/gpbackup_20250701152545_metadata.sql: unexpected EOF
github.gwd.broadcom.net/TNZ/gp-common-go-libs/gplog.FatalOnError
        /tmp/build/16b62eb3/gpbackup_src/vendor/github.gwd.broadcom.net/TNZ/gp-common-go-libs/gplog/gplog.go:481
github.gwd.broadcom.net/TNZ/gp-backup/utils.(*PluginConfig).MustRestoreFile
...

Further investigation of the plugin log (gpbackup_s3_plugin_yyyymmdd.log) shows a failure to complete the file download:

20250701:15:30:07 gpbackup_s3_plugin:gpadmin:xxx:793738-[DEBUG]:-File gpbackup_20250701152545_metadata.sql size = 457875949 bytes
20250701:16:28:07 gpbackup_s3_plugin:gpadmin:xxx:793738-[ERROR]:-Error while downloading test1/backups3/backups/20250701/20250701152545/gpbackup_20250701152545_metadata.sql: unexpected EOF

 

Cause

The "unexpected EOF" error typically indicates that the download was interrupted, possibly due to a network issue or a timeout while fetching large files from S3.

Resolution

Step 1: Verify Network Connectivity

Ensure that the Greenplum Database (GPDB) hosts have stable network connectivity to the S3-compatible storage. You can test access using a tool such as s3cmd.

Install s3cmd:

# sudo yum install -y epel-release
# sudo yum install -y s3cmd

# cat ~/.s3cfg
host_base = <S3_endpoint>:<port>
host_bucket = <bucket_name>
bucket_location = eu-west-3
use_https = no
access_key = <access_key>
secret_key = <secret_key>

# s3cmd ls
# s3cmd get s3://test1/backups3/backups/20250701/20250701152545/gpbackup_20250701152545_metadata.sql /tmp/

If the file can be downloaded successfully using s3cmd, proceed to Step 2.
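If the s3cmd download succeeds but is slow or stalls intermittently, it can also help to time the transfer and confirm basic reachability of the S3 endpoint. The commands below are a minimal sketch; <S3_endpoint> and <port> are placeholders for your environment, and because gprestore runs the plugin on the segment hosts as well, consider repeating the checks from those hosts.

# time s3cmd get --force s3://test1/backups3/backups/20250701/20250701152545/gpbackup_20250701152545_metadata.sql /tmp/
# curl -v http://<S3_endpoint>:<port>
# ping -c 5 <S3_endpoint>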

Step 2: Reduce the Multipart Chunk Size

If there are no network issues, try reducing the restore_multipart_chunksize option in the S3 plugin configuration file used for the restore. The default chunk size is 500MB, which may be too large for certain network environments; lowering the value (for example, to 25MB) can help avoid download timeouts.

For instructions on how to configure restore_multipart_chunksize, please refer to the official documentation:

Managing the S3 Plugin – Greenplum Backup and Restore
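As an illustrative sketch only (the configuration file name, executable path, endpoint, credentials, bucket, and folder shown here are placeholders; keep whatever other options already exist in your plugin configuration file), the reduced chunk size is set in the options block of the S3 plugin configuration file that is passed to gprestore:

# cat /home/gpadmin/s3_config.yaml
executablepath: <absolute_path_to>/gpbackup_s3_plugin
options:
  endpoint: http://<S3_endpoint>:<port>
  aws_access_key_id: <access_key>
  aws_secret_access_key: <secret_key>
  bucket: <bucket_name>
  folder: <folder_name>
  restore_multipart_chunksize: 25MB

Then re-run the restore with this file supplied through the --plugin-config option, keeping any other flags from the original gprestore command, for example:

# gprestore --timestamp 20250701152545 --plugin-config /home/gpadmin/s3_config.yaml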

If the issue persists after applying the above workarounds, please collect the gprestore and gpbackup_s3_plugin logs along with the network test results, then contact support for further assistance.