Broadcom API Gateway - MySQL ERROR "Operating system error number 28"

Article ID: 220200


Products

CA API Gateway

Issue/Introduction

The MySQL log shows warnings/errors like the following example:

[Warning] InnoDB: 1048576 bytes should have been written. Only 839680 bytes written. Retrying for the remaining bytes.
[Warning] InnoDB: Retry attempts for writing partial data failed.
[ERROR] InnoDB: Write to file (merge)failed at offset 1701838848, 1048576 bytes should have been written, only 839680 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
[ERROR] InnoDB: Error number 28 means 'No space left on device'
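
As a quick check, these messages can be located directly in the MySQL error log. A minimal sketch, assuming the default appliance log location /var/log/mysqld.log (adjust the path if the Gateway's my.cnf points elsewhere):

# Confirm whether error 28 has been logged recently (log path is an assumption)
grep -i "error number 28" /var/log/mysqld.log | tail -n 20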

Environment

Broadcom API Gateway 9.x/10.x - MySQL 5.x/8.x

Cause

The MySQL server detects and reacts to I/O errors raised by the underlying operating system.

Error number 28 is one of the errors that the MySQL server retrieves directly from the OS; it means that the system has run out of disk space.

This can lead to an outage, because the InnoDB storage engine reacts proactively to the error in order to avoid data corruption.
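
To confirm what a given operating system error number means, the perror utility shipped with MySQL can translate the code into its textual description. A minimal sketch (the exact output wording may vary slightly between MySQL versions):

# Translate OS error number 28 into a human-readable description
perror 28
# Expected output (approximately):
# OS error code  28:  No space left on device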

Resolution

There are several scenarios to consider in order to find out what is causing the shortage of available disk space; a short triage sketch follows the list below.

- One or more partitions are out of free disk space. This is not necessarily the main /var/lib/mysql/ partition; it can be any other partition to which MySQL may need to write (logs, temporary files, PID file, etc.), for example the temp (/tmp) or root (/) partition. A "df -h" OS command should quickly highlight which partition(s) are full.

- A very large number of files exists on the filesystem, causing inode usage to reach 100%. A "df -i" command will output the current inode usage on the system.

- The disk quota for the user ID under which the MySQL server runs has been exhausted.

- The MySQL server created a large number of temporary tables that used up the free disk space, and these were deleted after a crash.

- For virtual appliances, the virtual disk has been provisioned as "Thin" rather than with the recommended "Thick Provision Lazy Zeroed" option. A virtual disk provisioned as "Thick" is optimal for performance because its full size is allocated up front on the designated ESX/VMware storage. A virtual disk provisioned as "Thin", by contrast, is an elastic/expandable disk that grows on demand as the virtual server requires it. This can lead to a situation where the problem occurs right before more disk space is requested (the guest sees the disk as full) and the underlying hypervisor provides it.
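
The checks above can be combined into a quick triage pass from the Gateway shell. A minimal sketch, assuming a standard appliance layout with the data directory under /var/lib/mysql and the MySQL server running as the OS user "mysql" (adjust paths and user names to match your environment; the quota check also assumes the quota tools are installed):

# 1. Which partition(s) are out of free space?
df -h

# 2. Is inode usage exhausted on any filesystem?
df -i

# 3. Is a disk quota in effect for the MySQL OS user?
quota -s -u mysql

# 4. What is actually consuming space under the data directory and /tmp?
du -sh /var/lib/mysql/* | sort -h | tail -n 10
du -sh /tmp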