Tackling Large Tables to Improve MySQL Database Performance

Oftentimes, database professionals make the mistake of jumping to conclusions when trying to improve MySQL database performance. They assume that the database must be the reason why the application has slowed down. 

In most cases they may be right, which is why it’s important to start looking for possible bottlenecks and removing them to reduce lag. However, make sure you consider multiple forms of diagnostic data when attempting to uncover the root cause behind poor MySQL database performance. Don’t rely solely on CPU usage or disk IO: a single metric is far more likely to lead you to an incorrect diagnosis.

We need to look at the full picture to understand the complex interdependencies among CPU, memory, and IO. It is important to do so before making reactive changes, such as increasing disk capacity or memory. In this blog, we will take a look at one such reason behind performance bottlenecks: large data volumes.

How Large Data Volumes Affect MySQL Database Performance

Statements that cover a wide scope of data, or that are poorly refined, may fetch unreasonably large quantities of information from the database. This doesn’t seem like a problem at first, while the database is new and holds minimal data.

The true issue emerges as the database grows, gradually pushing the database server toward its limits. When a statement fetches data, that data must be scanned into memory. The bigger the data set that needs scanning, the greater the load on the CPU, resulting in sudden CPU spikes and the need for burst capacity. This kind of usage increases the chances of your database server crashing.

Additionally, even if the data does make it out of the database server, your application server may not be sufficiently provisioned to handle it. This problem is known as over-fetching, and you can overcome it by limiting the scope of data selection to relevant records. One way to do that is to add a WHERE clause to such queries, after you find them, of course.
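As a minimal sketch of that idea (the orders table and its columns are hypothetical, not from the original post), restricting the selection with a WHERE clause and an explicit column list keeps the result set small:

-- Over-fetching: pulls every row and every column into the result set
SELECT * FROM orders;

-- Restricted to the relevant records and columns only
SELECT order_id, customer_id, order_total
FROM   orders
WHERE  order_date >= '2023-01-01'
  AND  order_status = 'OPEN';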

The key to locating them is to search the database logs and metrics for tell-tale signs of large-scale data fetching. Although you might be able to spot CPU spikes or burst credit utilization from these metrics, it might not be easy to tell which specific statements are causing them.

Things You Can Do to Improve MySQL Database Performance

Query optimization is one of the best places to begin when you have to improve MySQL database performance. But it differs from case to case and is far from a one-size-fits-all endeavor. That said, there are certain tasks that help in a lot of cases:

  • As mentioned above, you can prevent large result sets and decrease data volume by limiting the search to relevant records using the WHERE clause.
  • Go through the database schema to uncover ways to decrease complexity. For instance, keep an eye out for queries that contain a lot of joins, since they take more time than most queries. You can often make them run faster by reducing the number of joined relationships.
  • A large number of queries also fetch unnecessary fields from tables. Rewrite them to return only the fields that matter, which again keeps you from over-fetching.
  • Views can help in some, but not all, cases. A view is similar to a table that you create beforehand by executing a statement, so values that would otherwise require on-the-spot calculation are already defined (see the sketch after this list).
  • Change the syntax of the SQL to influence the database SQL optimizer to generate a better query plan.
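A minimal sketch of the view idea, again using hypothetical table and column names. Note that a regular MySQL view is not materialized, so the gain comes from reusing one well-tuned definition rather than from stored results:

-- Define the calculation once as a view
CREATE VIEW customer_order_totals AS
SELECT customer_id,
       COUNT(*)         AS order_count,
       SUM(order_total) AS total_spent
FROM   orders
GROUP BY customer_id;

-- Later queries stay short and fetch only what they need
SELECT customer_id, total_spent
FROM   customer_order_totals
WHERE  total_spent > 10000;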

Conclusion

If your application is performing poorly, the problem often lies with the database, typically with inefficient queries. While there isn’t any solution that works for every single query out there, database experts can home in on the ones that require optimization through diligent analysis and monitoring, along with the right SQL optimizer tool. After they find the queries behind slow database performance, all they have to do is take the right steps to resolve the issue. These include optimization techniques such as adding indexes, removing unnecessary fields, and adding WHERE clauses wherever necessary.

Transferring Data in SQL Server with an Eye on Performance

Database professionals often need to archive older data in SQL Server by transferring it from one table to another. There are multiple ways to achieve the transfer, the most useful of which we will discuss in this blog. We will also provide tips to ensure database performance doesn’t suffer while these approaches are carried out.

Different Methods to Move Data from One Table to Another

Consider the following techniques that DBAs use when they have to move data from one table into another, along with some ways to improve SQL query performance while using them:

  1. Insert data with the INSERT INTO command – The INSERT INTO query is one of the basic methods of moving data from table 1 to table 2. You can decrease the time this method takes by adjusting the recovery model: if the database is running under the full recovery model, switch it to the bulk-logged model. Doing so saves execution time because it skips full logging of bulk operations. The following statement should help with this:

ALTER DATABASE <database name> SET RECOVERY BULK_LOGGED;

Once you switch to the bulk-logged recovery model, use a TRUNCATE statement to flush table 2 (the destination). You can then run the same script you were using to transfer the data.
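A minimal sketch of that flow, assuming hypothetical names (SalesArchive, dbo.source_table, dbo.destination_table and the column list are not from the original post):

-- Switch to minimal logging for the bulk operation
ALTER DATABASE SalesArchive SET RECOVERY BULK_LOGGED;

-- Flush the destination table, then reload it from the source
TRUNCATE TABLE dbo.destination_table;

INSERT INTO dbo.destination_table (order_id, customer_id, order_total)
SELECT order_id, customer_id, order_total
FROM   dbo.source_table
WHERE  order_date < '2020-01-01';

-- Switch back to full recovery once the transfer is done
ALTER DATABASE SalesArchive SET RECOVERY FULL;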

  2. Use the SELECT INTO query – Using SELECT INTO rather than INSERT INTO can prove useful in some cases. However, the benefits are most significant when the recovery model is bulk-logged, for the reason mentioned above. Although SELECT INTO cannot place the data into an existing table, newer versions of SQL Server make things easier by letting you pick the filegroup on which the new table is created.
  3. INSERT INTO query + TABLOCK hint – Using the two in combination has been known to provide better database performance. To achieve this, use the TABLOCK hint on table 2. If the destination table has no clustered index or other constraints, its data remains a heap, and using the TABLOCK hint while inserting into a heap with INSERT INTO is especially helpful. It improves logging and locking behaviour, since a single table-level lock is taken instead of a lock on every row or page.
  4. Adding data using the SWITCH TO query – You can also move data with the help of the SWITCH TO command. Although this query is typically used to transfer data between partitions of separate tables, it can help here as well, by moving data from one partition to the next through the ALTER TABLE command. If no partitions are allocated, the data is transferred between the tables themselves. Before you begin the insertion, make sure you disable any constraints or indexes that exist on the destination table; from a performance perspective, it is better to re-enable constraints and rebuild indexes after the insert. (A combined sketch of these approaches appears after this list.)
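The sketch below illustrates the three variations above under the same assumed names; the SELECT INTO ... ON <filegroup> form requires a reasonably recent version of SQL Server, and ArchiveFG is a hypothetical filegroup:

-- 2) SELECT INTO, optionally targeting a specific filegroup
SELECT order_id, customer_id, order_total
INTO   dbo.destination_table ON [ArchiveFG]
FROM   dbo.source_table
WHERE  order_date < '2020-01-01';

-- 3) INSERT INTO an existing heap with the TABLOCK hint
INSERT INTO dbo.destination_table WITH (TABLOCK) (order_id, customer_id, order_total)
SELECT order_id, customer_id, order_total
FROM   dbo.source_table
WHERE  order_date < '2020-01-01';

-- 4) Metadata-only transfer of a table or partition with SWITCH TO
--    (both tables must have identical structure and sit on the same filegroup)
ALTER TABLE dbo.source_table SWITCH TO dbo.destination_table;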

Tips for Enhancing Performance During Data Transfer and Insertion

  • Reduce IO lag – Latency can negatively impact the process of writing database files to disk. You can decrease latency and bottlenecks by using SSDs, which perform considerably better than traditional SATA or SCSI spinning disks.
  • Maintain Robust Server Infrastructure – The system needs to be properly built to ensure competent performance for various database operations. The greater the pressure on the resources, the greater the effect on performance.
  • Follow ACID Properties – ACID properties make sure each transaction retains certain guarantees as it is processed. In the case of data insertion, the isolation factor is especially important to consider because the inserted values come from another source. The statements should therefore run under a suitable isolation level to maintain integrity within the database (see the sketch after this list).
  • Database Settings – One of the best ways to achieve improved outcomes is to maintain the right database configuration, since these settings can have a significant effect on performance. The placement of the database files on disk and the TempDB settings are two examples.
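A minimal sketch of the isolation point, using the same hypothetical table names; REPEATABLE READ holds locks on the rows read so the source data cannot change mid-transfer:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;

INSERT INTO dbo.destination_table (order_id, customer_id, order_total)
SELECT order_id, customer_id, order_total
FROM   dbo.source_table
WHERE  order_date < '2020-01-01';

COMMIT TRANSACTION;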

These are the various ways in which you can gain better performance at the query, trace, and constraint levels along with additions that can improve the execution of insert operations.

SQL Server: Knowing How Heaps and Clustered Indexes Work

Heaps and clustered indexes are two different ways of storing data in SQL Server. Both have their advantages and disadvantages, and we will discuss them in this post.

A Bit about Heaps

Heaps are essentially piles of data that remain unsorted and unorganized, hence the name. A heap is a table without a clustered index, although it may still have non-clustered indexes on it. Heaps provide the benefit of faster inserts, which helps while adding data to a table: insertion is quicker because the data doesn’t have to be written in any logical order.

A Bit about Clustered Indexes

A clustered index is a more organized way of storing data. In fact, it is the go-to technique for logically sorting the information in a table. A clustered index doesn’t require a primary key; you can create one on any chosen key column or columns. Most DBAs recommend creating it on the columns referenced most often by frequently executed queries. A clustered index also reduces the need for later optimization, since all the data is kept sorted according to it. Its primary benefit is that it speeds up data reads.
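As a minimal sketch (the table, column, and index names are hypothetical), a clustered index is created on the column that queries filter or sort on most often:

-- Physically order the table by its most frequently queried column
CREATE CLUSTERED INDEX IX_orders_order_date
ON dbo.orders (order_date);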

Knowing When to Use a Clustered Index

As noted above, using a clustered index leads to better read rates. Therefore, there are several instances where you may need to identify whether a clustered index will improve query performance compared to a heap.

To do this, you need to follow these steps:

  • First, it is important to understand where there is a requirement for greater read speed.
  • Check the dynamic system views and look for large tables without a clustered index (a sample query follows this list).
  • Once you locate a few such tables, analyse the query plans and statistics in the SQL Server dynamic management views. Searching for the table name will show you how frequently each plan is used, along with the query text and other details that reveal whether a heap or a non-clustered index is being used instead.
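A minimal sketch of the first two steps using the standard SQL Server catalog views; it lists user tables stored as heaps together with their row counts:

SELECT o.name      AS table_name,
       SUM(p.rows) AS row_count
FROM   sys.objects o
JOIN   sys.partitions p ON p.object_id = o.object_id
WHERE  o.type = 'U'          -- user tables only
  AND  p.index_id = 0        -- index_id 0 means the table is a heap
GROUP BY o.name
ORDER BY row_count DESC;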

If the table in question is used inside SQL objects, you will see the object names in the second result set. Once you have reviewed the query plans relevant to the use cases, you will have enough information to decide whether the table requires a clustered index or whether a heap is more suitable. In the former case, you will also have to choose the columns to include in the index. Tables with several use cases that mostly share the same columns can return result sets faster with a clustered index.

When Not to Use Clustered Indexes

This is just as important to know because, believe it or not, there are instances where a clustered index can do more harm than good to database performance.

A logging table is one such instance, as it normally receives far more insert operations than reads or updates; its purpose is to record every occurrence, while users rarely refer to it. If you place a clustered index with a sequentially increasing key on this kind of table, inserts converge on the last available page and can cause hot-latch contention, as every session keeps adding rows to the same page. The one case where this issue doesn’t arise is when the index’s leading column is a GUID, because GUID values aren’t sequential.

Using a clustered index with an excessive number of key columns isn’t the best idea, either. The reason is simple: the index defines the default sort order, and too many columns mean repeated re-sorting with each new use case, slowing down the database. A wide clustering key also increases the size of every non-clustered index on the table.

Another situation where a clustered index won’t help is a key column that isn’t static, i.e. one that undergoes frequent changes. Changing key values on an index has a far greater chance of creating performance problems, because updating key values typically leads to page splits, and those require maintenance that takes resources and affects performance.

How to use FORCE INDEX Hints to tune an UPDATE SQL statement?

We use FORCE INDEX hints to make a SQL statement use a specific index when the database SQL optimizer chooses not to, believing that avoiding the index will perform better. But enabling an index is not as simple as adding an index search to the query plan: it may entirely change the structure of the plan, which means that forecasting the performance of the new FORCE INDEX hint is not easy. Here is an example showing how to use FORCE INDEX optimization hints to tune a SQL statement.

Here is a simple example SQL that updates EMP_SUBSIDIARY if the emp_id is found in EMPLOYEE with certain criteria:

update EMP_SUBSIDIARY set emp_name=concat(emp_name,'(Headquarter)')
where emp_id in
      (SELECT emp_id
       FROM   EMPLOYEE
       WHERE  emp_salary < 1000000
         AND  emp_grade < 1150);

The following is the query plan of this SQL; it takes 18.38 seconds. The plan shows a Full Table Scan of EMPLOYEE and then a Nested Loop to EMP_SUBSIDIARY with a Unique Key Lookup on the Emp_sub_PK index.

We can see that the filter condition "emp_salary < 1000000 and emp_grade < 1150" is applied during the full table scan of EMPLOYEE, with an estimated "filtered" value (ratio of rows produced per rows examined) of 3.79%. It seems the MySQL SQL optimizer failed to use an index to scan the EMPLOYEE table, so we should consider forcing MySQL to use either the emp_salary or the emp_grade index.
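If you want to reproduce this check yourself, MySQL's EXPLAIN statement shows the chosen access path and the estimated "filtered" percentage; a minimal sketch against the subquery:

EXPLAIN
SELECT emp_id
FROM   EMPLOYEE
WHERE  emp_salary < 1000000
  AND  emp_grade < 1150;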

Unless you fully understand the data distribution and do a very precise calculation, you cannot tell which index is the better choice.

Let’s try to force the index of emp_salary first.

update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (select emp_id
                  from   EMPLOYEE FORCE INDEX(`emps_salary_inx`)
                  where  emp_salary < 1000000
                    and  emp_grade < 1150);

This SQL takes 8.92 seconds, roughly twice as fast as the original query plan without the FORCE INDEX hint.

Now let’s try forcing the emp_grade index instead.

update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (select emp_id
                  from   EMPLOYEE FORCE INDEX(`emps_grade_inx`)
                  where  emp_salary < 1000000
                    and  emp_grade < 1150);

Here is the resulting query plan of the SQL with the FORCE INDEX(`emps_grade_inx`) hint injected; the execution time is reduced to 3.95 seconds. The new plan shows an Index Range Scan of EMPLOYEE via the EMP_GRADE index; the result is fed into a subquery (materialized temporary table) and then a Nested Loop to EMP_SUBSIDIARY for the update. This plan's estimated cost is lower, and it performs better than the original SQL. Because of the limited plan space explored during real-time SQL optimization, this plan cannot be generated from the original SQL text, so manual hint injection is necessary to help the MySQL SQL optimizer find a better query plan for this statement.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for MySQL, which shows that the hint-injected SQL is more than 4.6 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-tse-for-mysql-2/

How to build indexes for multiple Max() functions for SQL Server?

For SQL statements with multiple MAX() functions in the select list and nothing in the WHERE clause, there are several ways to create new indexes that improve SQL speed.

Here is an example SQL; it retrieves the maximum name and the maximum age from the employee table.

select max(emp_name),
     max(emp_age)
from  employee

The following is the query plan that takes 9.27 seconds.

The SQL cannot be tuned by syntax rewrite or hint injection, and SSMS cannot recommend any index to improve it.

For this kind of SQL, we can consider building either a composite index or two individual indexes on emp_name and emp_age. A new composite index on these two columns (emp_age, emp_name) improves the SQL around 7 times. The following query plan shows that the new composite index is used, but it still has to scan the entire index for the two Stream Aggregate operations before producing max(emp_name) and max(emp_age).
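A minimal sketch of the composite index described above (the index name is hypothetical):

CREATE NONCLUSTERED INDEX ix_employee_age_name
ON employee (emp_age, emp_name);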

What if we build two individual indexes on emp_name and emp_age instead? The following is the resulting query plan once these two indexes are created. A Top operator selects the first row from each index and feeds it to a Stream Aggregate operation, and a Nested Loops operator then joins the two maximum results together. It is 356 times faster than the original SQL.
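A minimal sketch of the two single-column indexes (again with hypothetical names); each one lets SQL Server pick the maximum value from one end of an index with a Top operator:

CREATE NONCLUSTERED INDEX ix_employee_name ON employee (emp_name);
CREATE NONCLUSTERED INDEX ix_employee_age  ON employee (emp_age);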

This kind of index recommendation can be made automatically by Tosska SQL Tuning Expert Pro for SQL Server.

Tosska SQL Tuning Expert Pro (TSES Pro™) for SQL Server – Tosska Technologies Limited

How does the order of the columns in a composite index affect subquery performance in Oracle?

We know that the order of the columns in a composite index determines whether the index can be used against a table: a query will use a composite index only if its WHERE clause includes at least the leading (left-most) columns of the index. But things are far more complicated in correlated subquery situations. Let's use an example SQL to explore the details.

SELECT D.*
FROM   department D
WHERE EXISTS (SELECT    Count(*)
         FROM     employee E
         WHERE     E.emp_id < 1050000
                AND E.emp_dept = D.dpt_id
         GROUP BY  E.emp_dept
         HAVING    Count(*) > 124)

The following is the query plan of the SQL; it takes 10 seconds to finish. We can see that the SQL can utilize the E.emp_id and E.emp_dept indexes individually.

Let's see whether a new composite index can improve the SQL's performance. As a rule of thumb, the higher-selectivity column E.emp_id is placed first in the composite index (E.emp_id, E.emp_dept).
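In Oracle syntax, the first attempt looks like this (the index name is hypothetical):

CREATE INDEX emp_id_dept_inx ON employee (emp_id, emp_dept);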

The following is the query plan with the new composite index (E.emp_id, E.emp_dept), and the resulting performance is not good: it takes 11.8 seconds, even worse than the original query plan.

If we change the order of the columns in the composite index to (E.emp_dept, E.emp_id), the following query plan is generated and the speed improves to 0.31 seconds.
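The reversed column order, again with a hypothetical index name:

CREATE INDEX emp_dept_id_inx ON employee (emp_dept, emp_id);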

The two query plans above are similar; the only difference is operation "2". The composite index with E.emp_id as the first column uses an INDEX RANGE SCAN, whereas the second plan uses an INDEX SKIP SCAN on the composite index whose first column is E.emp_dept. You can see an extra filter operation for E.emp_dept in the Predicate Information of the INDEX RANGE SCAN on (E.emp_id, E.emp_dept), while the (E.emp_dept, E.emp_id) composite index uses an INDEX SKIP SCAN without any extra operation to filter E.emp_dept again.

So you have to test the column order of composite indexes very carefully for correlated subqueries; sometimes it will give you improvements that exceed your expectations.

This kind of index recommendation can be achieved by Tosska SQL Tuning Expert for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

Do not underestimate your SQL Server's potential

For SQL statements that cannot be tuned by syntax rewrite or hint injection, even when all the obvious indexes are built, people may think a hardware upgrade is the only way to resolve the performance problem. But please don't underestimate your SQL Server's SQL optimizer, which can deliver performance you may not have imagined before. What you need to do is provide SQL Server with a set of proper new indexes.

Here is an example SQL. For every employee whose salary is greater than all the salaries in the emp_subsidiary table for subsidiary department "AAA", it retrieves the emp_id together with the minimum salary across the employee table.

SELECT emp_id,
       (SELECT min(emp_salary)
        FROM   employee)
FROM   employee
WHERE  emp_salary > (SELECT max(emp_salary)
                     FROM   emp_subsidiary
                     WHERE  emp_dept = 'AAA');

Although every column that appears in the SQL is indexed, the following query plan takes 44 seconds.

The SQL cannot be tuned by syntax rewrite or hint injection, and SSMS can recommend only one index on one table per SQL statement, so it fails to recommend any good index. In other words, the SQL cannot be tuned in any traditional way.

Let's use our new A.I. index recommendation engine to see if there is a good index solution. The recommended set of indexes is listed below, and with it the SQL takes only 0.55 seconds.

Example: 80 times faster A.I. SQL index recommendation

The query plan shows that two new indexes are used at the same time, something SSMS is not able to suggest.

Tosska SQL Tuning Expert Pro has a built-in A.I. engine that recommends indexes across multiple tables at the same time for a single SQL statement. The technology is powerful because it considers how each table's new indexes affect the others in the query plan, which helps SQL Server's SQL optimizer explore potential query plans that could not be generated before. So don't underestimate your SQL Server's ability; use the right tool to tune your SQL statements before planning a hardware upgrade.

Tosska SQL Tuning Expert Pro (TSES Pro™) for SQL Server – Tosska Technologies Limited

The Importance of Disk Operations in Query Performance Tuning

DBAs can't ignore disk operations when working on query performance tuning. When talking about databases, 'disk' goes by many names, such as 'storage', 'I/O', 'reads', or 'disk operations'.

Although database professionals know these terms all mean the same thing, they might confuse those outside the field. Whichever term is used, it usually refers to the number of disk operations required to fetch data from the disk resource.

Why You Can’t Ignore Disk Operations During Query Performance Tuning

The fact remains, however, that an overwhelming majority of SQL Server databases face their bottleneck on the disk resource. This doesn't change regardless of whether you have old-school hard drives or the latest flash storage arrays. Given below are some major reasons behind this, and how query optimization can address them:

  1. Most slow queries are slow because they have to scan a large amount of data. A lot of the time, this is unnecessary and it’s making your SQL Server perform a lot of unneeded and really sluggish read operations.
  2. When the database reads data, it needs a place to store that information, which it does in RAM. However, since RAM has a limited capacity, older information starts getting evicted as newer data comes in.
  3. Because RAM is never enough, it is often unable to hold all the data that SQL Server fetches. The remaining data has to be kept on disk, which is far slower than RAM, and any information not present in RAM has to be fetched from disk, the slowest of all database operations. Some DBAs even compare fetching data from RAM versus disk to sprinting versus tip-toeing.
  4. So, if we tune a query to read less data than it did before, such as twenty rows instead of twenty thousand, it helps in two ways. Not only does it reduce the database's disk-operation workload, it also requires far fewer resources, including CPU and RAM, to process the data. The end-user is unaware of all these operations; all they know and appreciate is the time it takes for the query to return, i.e. they just want the screen in the app to come back as quickly as possible. This is why query performance tuning focuses on decreasing disk reads (see the sketch after this list).
  5. DBAs also tune to lower the use of other resources, such as CPU or RAM, but only in special situations where those resources are consistently overused at dangerous levels. For instance, if CPU usage sits constantly at 90% or above, the DBA will consider CPU tuning.
  6. Tuning queries that fetch large volumes of data so that they fetch much smaller volumes instead improves SQL Server capacity. When a query takes up fewer resources, it leaves room for more users and queries, allowing the same server to handle a greater load than it otherwise could. Query optimization also extends the useful life of the same server, delaying the need for a hardware upgrade.
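A minimal sketch of how to observe the effect on disk reads in SQL Server (the table and column names are hypothetical); compare the "logical reads" reported before and after narrowing the query:

SET STATISTICS IO ON;

-- Broad query: scans far more pages than the caller needs
SELECT * FROM dbo.orders;

-- Narrowed query: touches only the rows and columns required
SELECT TOP (20) order_id, order_total
FROM   dbo.orders
WHERE  customer_id = 42
ORDER BY order_date DESC;

SET STATISTICS IO OFF;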

Summing Up

The above-mentioned reasons shed light on the fact that disk operations play a major role in enabling efficient database query performance. You can’t always blame the CPU; in fact, you can rarely do so since 95% of bottlenecks occur on the disk resource.

The CPU, on the other hand, is often only a lagging indicator: its usage typically drops once storage reads are reduced.

Backup and Recovery in SQL Server: Understanding the Basics (Part 2)

This blog is the second part of a two-part series explaining recovery models in SQL Server.

Each of these aims at different requirements to provide partial or complete data recovery. The Database Administrator selects the recovery model depending on the resource and data requirements of the organization. The goal of the recovery model is to balance the logging overhead with data recovery criticality.

Types of Recovery Models for SQL Server

Given below are the recovery models you can use in your backup and restore strategy:
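Before choosing, you can check which model each database currently uses; a minimal sketch against the standard system catalog:

SELECT name, recovery_model_desc
FROM   sys.databases
ORDER BY name;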

Simple: This model does not support transaction log backups. SQL Server truncates the transaction log automatically at checkpoint operations when using this recovery model, which frees up log space to store additional transactions.

Although the Simple Recovery model is the simplest with regard to t-log backup management, it prevents point-in-time database restores. This can lead to devastating data loss when your data changes frequently and your full or differential backups aren't run regularly.

In other words, the frequency of your backups will determine the amount of data loss you are likely to experience if you have to restore your database using the Simple Recovery model.

Full: The use of this model ensures the t-log file holds all your transactions until you run a t-log backup. No automatic truncation takes place here, unlike in the Simple Recovery model. Moreover, the Full Recovery model enables users to restore their database to any point in time, as long as the transaction log backups contain it. This minimizes data loss but is more likely to affect database performance.

It is important to remember when using this model that the t-log will keep saving information as you make changes to the database. Therefore, you’ll have to carry out transaction log backups on a frequent basis to keep them from getting too large. Creating a t-log backup clears the previously-stored data, making space available to store new transactions.

Taking a log backup does not reduce the amount of disk space the transaction log file occupies, nor should you expect it to. You will have to pre-size the transaction log based on expected activity, though you can set it to auto-grow in case it uses up all of its space. However, refrain from shrinking these files unless you have no other choice; they are typically shrunk with the help of T-SQL commands in SQL Server.
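A minimal sketch of the Full-model routine (the database name and backup paths are hypothetical):

-- Switch the database to the Full recovery model
ALTER DATABASE SalesDb SET RECOVERY FULL;

-- Take a full backup first, then regular transaction log backups
BACKUP DATABASE SalesDb TO DISK = 'D:\Backups\SalesDb_full.bak';
BACKUP LOG SalesDb TO DISK = 'D:\Backups\SalesDb_log.trn';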

Bulk-Logged: It shares a lot of similarities with the Full Recovery model, except for its minimal logging feature. In it, certain bulk operations aren’t fully logged in the transaction log, such as TRUNCATE, BULK import, and SELECT INTO. These operations are called minimally logged operations, thanks to which your t-logs won’t grow as much in size as compared to the Full Recovery model.

On the other hand, this type of operation keeps users from carrying out point-in-time restores. This is a disadvantage for many, as it increases the chances of critical data loss. Therefore, experts recommend sticking with the Full Recovery model whenever you're unsure whether this model is the right choice for your requirements. Despite its performance impact, the Full model lets you guarantee data availability, and you can still use SQL tuning tools to address the performance issue.

In Conclusion

Using the right recovery model, you can recreate and restore the entire database in one step. This process overwrites the current database, or creates it if it no longer exists. The restored database will be identical to the state of the database when the backup was taken, minus any uncommitted transactions that were in progress at that time; these are rolled back once database recovery has completed.