This article includes content, links, and a best practices guide covering Microsoft SQL Server database best practices.
Note: All of the proposed guidelines/recommendations included in this article and document MUST be discussed with a DBA first before taking any action or making any changes to the database.
Why the SQL Server Transaction Log Grows Even in Simple Recovery Model (Disk Filling Up)
SQL Server always uses the transaction log for data modifications, regardless of the recovery model in use. The recovery model only affects what happens to the log after a transaction completes.
From the following MSDN article:
"Log truncation frees any virtual logs whose records all appear in front of the minimum recovery log sequence number (MinLSN). The MinLSN is the log sequence number of the oldest log record that is required for a successful database-wide rollback. The log may grow very large because the log cannot be truncated past the MinLSN. This happens even if the database is using the simple recovery model, in which the transaction log is normally truncated on each automatic checkpoint."
We highly recommend that you separate the LDF (transaction log) file from the MDF (data) file. If they are on the same drive, they contend for disk I/O. This slows down overall performance, particularly for inserts and updates, which are central to data_engine and DB operations in general.
One of the things you may want to consider in this scenario is either increasing the disk space or shrinking the log file, using either DBCC SHRINKFILE or SQL Server Management Studio (SSMS).
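As a rough sketch of what a shrink might look like (the database name CA_UIM is taken from the log excerpts later in this article, and the logical log file name CA_UIM_log is an assumption; confirm both, and the change itself, with your DBA first):

```sql
-- Report transaction log size and percent used for all databases
DBCC SQLPERF(LOGSPACE);

-- Find the logical name and size of each file in the UIM database
-- ('CA_UIM' is an example name taken from the log excerpts below)
USE CA_UIM;
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Shrink the log file to roughly 1 GB (target size is given in MB);
-- 'CA_UIM_log' is an assumed logical file name -- use the name returned above
DBCC SHRINKFILE (CA_UIM_log, 1024);
```

Note that shrinking only reclaims space that is already free inside the log; if the log cannot be truncated (for example because a log backup is pending), the shrink will have little effect.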
Recovery Modes (Simple versus Full)
SQL Server offers Full versus Simple recovery models. The quick difference is that in the Simple recovery model, transaction log space is reused after each checkpoint (upon write-to-disk completion and/or the database's internal system checkpoint), and there is no need to back up the transaction log file for recovery purposes.
With the Full recovery model, transaction log space cannot be reused until the log has been backed up, and the log is used in recovery. So the Simple recovery model still makes use of the transaction log; it bloats during large or long-running transactions, and the space is then reclaimed after the checkpoint.
For customers with sizable UIM databases, the transaction log can become bloated during database maintenance (via deletes and re-indexing), but it normally returns to its usual size once maintenance completes.
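To see which recovery model is in effect, and to switch models if your DBA agrees it is appropriate, the following can be used (the database name CA_UIM is an example drawn from the log excerpts below):

```sql
-- Show the current recovery model of every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch the UIM database to the Simple recovery model
-- ('CA_UIM' is an example name; discuss this change with your DBA first,
-- since it removes the ability to restore to a point in time)
ALTER DATABASE CA_UIM SET RECOVERY SIMPLE;
```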
Example errors from the data_engine log:
Jan 4 02:49:44:408  de: ExecuteNoRecords - Query: TRUNCATE TABLE D_QOS_PROBES
Jan 4 02:49:44:408  de: [QoSData] ExecuteNoRecords - 1 errors
Jan 4 02:49:44:408  de: (1) ExecuteNoRecords [Microsoft OLE DB Provider for SQL Server] The transaction log for database 'CA_UIM' is full due to 'LOG_BACKUP'.
Jan 4 02:49:44:409  de: COM Error [0x80040e14] IDispatch error #3092 - [Microsoft OLE DB Provider for SQL Server] The transaction log for database 'NimsoftSLM' is full due to 'LOG_BACKUP'.
Jan 4 02:49:44:415  de: qos_check - InitializeOnce failed ...
Jan 4 02:49:44:424  de: [LSV] has disconnected from database
Jan 4 02:49:44:424  de: Database_global_lock LOCK
This can occur when transaction logging is set to an archive mode (Full recovery) instead of a circular method (Simple recovery). If you need to stay with an archive logging method, then we suggest separating the logs onto their own dedicated disk and archiving them to another format. You would need to talk to your DBA about what is required to restructure the database server.
- Check your recovery model and transaction log. If you are using the FULL recovery model, are you regularly backing up and truncating the transaction log so that it does not grow too large?
Some things you can do to relieve strain in a bad situation are as follows:
1 - Instead of waiting for the DB to auto-grow, increase the data file size so that there is plenty of extra DB space available.
2 - If you are running with the Full recovery model, all data modifications are written to the transaction log file, and log space is not freed until the log is backed up. This can create very large log files and affect performance.
3 - Follow all of the recommendations found in the NISDatabaseBestPracticesGuidev11.pdf document found in the TEC000003224.zip file in the File Attached section below.
Note that this PDF is somewhat dated, but it contains solid information on best practices in Chapters 4-5 and on DB performance analysis in Chapter 6.
4 - Backup/truncate the transaction log
5 - Or switch to Simple Recovery mode
- UIM 20.x or later
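If you stay on the Full recovery model, a scheduled log backup (step 4 above) is what allows inactive log space to be reused. A minimal sketch, where the database name CA_UIM and the backup path are examples only:

```sql
-- Back up the transaction log so that inactive log space can be reused.
-- Database name and backup path are illustrative; adjust for your environment,
-- and schedule this regularly (e.g., via a SQL Server Agent job).
BACKUP LOG CA_UIM
TO DISK = N'D:\Backups\CA_UIM_log.trn'
WITH COMPRESSION;
```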
What is the purpose of backing up SQL transaction logs?
The transaction log is a serial record of all the transactions that have been performed against the database since the transaction log was last backed up. With transaction log backups, you can recover the database to a specific point in time or to the point of failure.
Do I need to backup the transaction log?
If a database uses either the Full or bulk-logged recovery model, you must back up the transaction log regularly enough to protect your data, and to prevent the transaction log from filling. This truncates the log and supports restoring the database to a specific point in time.
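A point-in-time restore built from those log backups might look like the following sketch (database name, file paths, and the timestamp are all examples, not values from a real environment):

```sql
-- Restore the most recent full backup, leaving the database ready for log restores
RESTORE DATABASE CA_UIM
FROM DISK = N'D:\Backups\CA_UIM_full.bak'
WITH NORECOVERY;

-- Apply the transaction log backup up to a chosen point in time
RESTORE LOG CA_UIM
FROM DISK = N'D:\Backups\CA_UIM_log.trn'
WITH STOPAT = '2024-01-04 02:00:00', RECOVERY;
```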
If your data_engine 'delete raw samples' setting is set too high, e.g., 365 days, you may consider gradually cutting back on the size of your database, as there may be too much data for the data_engine maintenance to complete in a timely fashion without causing DB problems. Normally customers save 90 days or so of raw data unless there is some business reason to save more. With a high retention setting, data_engine maintenance can take a long time.
Partitioning of Raw Sample Data (SQL Server)
Important! When using the Partitioning feature, schedule maintenance to run daily. The time required to execute the partitioning depends on the amount of data as well as the performance of the disk subsystem, but for large installations it can take several hours or even several days.
The sample data tables can be partitioned in order to achieve improved performance.
The sample data will be partitioned by day, so if, for instance, you have configured the system to delete raw sample data older than 365 days, then each of the sample data tables (RN_QOS_DATA_xxxx) will be configured with 365 partitions (plus a few extra partitions to allow for faster maintenance).
SQL Server: If using partitioning, the property Delete raw data older than must be between 1 and 900. SQL Server, up to and including 2008 SP1, limits a table to 1000 partitions.
Partitioning will contribute to improved performance and faster, more efficient maintenance when accessing the raw sample data tables.
The data_engine runs a query to determine the MS SQL Server edition and, if applicable, enables the partition option in the data_engine.
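The exact query data_engine uses is not documented here, but the installed edition and version can be inspected with SERVERPROPERTY, which is useful for confirming whether partitioning is supported:

```sql
-- Table partitioning requires an edition that supports it (e.g., Enterprise);
-- SERVERPROPERTY returns the installed edition and product version.
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;
```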
DBAs can also examine the transaction log and/or use SQL profiler to see which transactions are filling up the log.
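As a starting point for that investigation, the log_reuse_wait_desc column reports why each database's log cannot currently be truncated (a value of LOG_BACKUP matches the data_engine errors shown earlier), and DBCC OPENTRAN reports the oldest active transaction. The database name CA_UIM is again an example:

```sql
-- Show why the log is not being truncated for each database;
-- e.g., LOG_BACKUP means a log backup is required before space can be reused
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;

-- Report the oldest active transaction in the UIM database,
-- which may be what is pinning the log
USE CA_UIM;
DBCC OPENTRAN;
```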
How to manually partition the UIM database for Microsoft SQL Server