How to Tune SQL with IN Subquery with Intersect for Oracle?

Here is an example SQL that retrieves data from the EMPLOYEE and DEPARTMENT tables for employees whose grade code exists in the GRADE table.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM grade
                         WHERE grd_min_salary < 200000)
       AND emp_dept < 'D'

The following is the query plan of this SQL; it takes 8.3 seconds to finish. The plan shows a Hash Join of GRADE and EMPLOYEE and then a hash join to DEPARTMENT. It looks like Oracle gave up any Nested Loops operations in this adaptive plan after seeing the actual number of rows returned from the GRADE table.

To ask Oracle to consider Nested Loops operations, I added an extra INTERSECT operation in the subquery to rapidly narrow down the set of grd_id values returned from the GRADE table first.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM   grade
                         WHERE  grd_min_salary < 200000
                         INTERSECT
                         SELECT e1.emp_grade
                         FROM   employee e1
                         WHERE  emp_dept < 'D')
       AND emp_dept < ‘D’

The rewritten SQL generates a query plan that is entirely different from the original. The new plan uses Nested Loops from DEPARTMENT to EMPLOYEE as the first step and then a Hash Join to the GRADE table. The new plan now takes only 0.81 seconds.


This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle; it shows that the rewrite is more than 10 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to Tune SQL Statement with LCASE function on index field?

Some business requirements may need to compare the lower case of an indexed column to a given string as a data retrieval criterion.

Here is an example SQL that retrieves records from the EMPLOYEE table if the lower case of the employee's name is equal to the string 'richard'.

select *
from   employee
where  LCASE(emp_name) = 'richard'

The following is the query plan of this SQL; it takes 17 seconds to finish. The plan shows a full table scan of the EMPLOYEE table.

You can see that this SQL cannot utilize an index scan even though emp_name is an indexed field. Let me add a "FORCE INDEX (emp_name_inx)" hint to the SQL in the hope that it helps the MySQL SQL optimizer use an index scan, but the hint alone fails to enable the index scan. So I add one more dummy condition, "emp_name >= ''", an always-true condition that emp_name must be greater than or equal to an empty string; it is used to increase the cost of not using the emp_name_inx index. One more condition, "emp_name is null", is added to keep the result correct when emp_name is a null value.

select *
from   employee force index(EMPS_NAME_INX)
where  LCASE(emp_name) = 'richard'
   and ( emp_name >= ''
      or emp_name is null )

Here is the query plan of the rewritten SQL, and it runs much faster. The new query plan shows that an Index Scan is now used, and it takes only 2.79 seconds.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for MySQL; it shows that the rewrite is more than 6 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-tse-for-mysql-2/

How to use ROWID to improve an UPDATE statement for Oracle?

The following is an UPDATE SQL with a subquery; it updates the EMPLOYEE table if emp_dept matches a record returned from the subquery.

update employee
   set emp_name = 'testing'
 where emp_dept IN (select dpt_id
                    from   department
                    where  dpt_name like 'A%')
   and emp_grade > 2000

You can see Oracle uses a Hash Join of the DEPARTMENT table and the EMPLOYEE table to execute the update. This query plan takes 1.96 seconds to complete, and no index is used even though emp_dept, dpt_id, and emp_grade are indexed columns. It looks like the most expensive operation is the Table Access Full scan of the EMPLOYEE table.

Let's rewrite the SQL into the following syntax to eliminate EMPLOYEE's Table Access Full operation from the query plan. The new subquery forces EMPLOYEE records to be extracted only when emp_dept exists in the DEPARTMENT table with dpt_name like 'A%'. The ROWID returned from the EMPLOYEE subquery ensures a more efficient access by table ROWID to the outer EMPLOYEE table.

UPDATE employee
SET    emp_name = 'testing'
WHERE  ROWID IN (SELECT ROWID
                 FROM   employee
                 WHERE  emp_dept IN (SELECT dpt_id
                                     FROM   department
                                     WHERE  dpt_name LIKE 'A%'))
   AND emp_grade > 2000

You can see the final query plan of this syntax has a lower cost without full table access to the EMPLOYEE table. The new syntax takes 0.9 seconds, more than 2 times faster than the original syntax.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle. There is another SQL rewrite with similar performance, but it is not suitable to discuss in this short article; maybe I can cover it later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to build indexes for multiple Max() functions for SQL Server?

For SQL statements with multiple Max() functions in the select list and nothing in the Where clause, there are different ways to create new indexes to improve the SQL speed.

Here is an example SQL; it retrieves the maximum name and the maximum age from the employee table.
select max(emp_name),
       max(emp_age)
from   employee

The following is the query plan that takes 9.27 seconds.

The SQL cannot be tuned by SQL syntax rewrite or hints injection, and the SSMS cannot recommend any index to improve the SQL.

For this kind of SQL, we can consider building either a composite index or two individual indexes on emp_name and emp_age. A new composite index on these two columns (emp_age, emp_name) can improve the SQL by around 7 times. The following query plan shows that the new composite index is used, but it has to scan the entire index for the two Stream Aggregate operations before getting the max(emp_name) and max(emp_age).
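
A minimal DDL sketch of such a composite index (the index name is hypothetical):

-- Composite index on the two aggregated columns
create index ix_employee_age_name on employee (emp_age, emp_name);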

How about building two individual indexes, one on emp_name and one on emp_age? The following is the result and query plan with these two indexes created. A Top operator selects the first row from each index and returns it to a Stream Aggregate operation, and then a Nested Loops join combines the two maximum results. It is 356 times faster than the original SQL.
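
A sketch of the two single-column indexes (again, the index names are hypothetical):

create index ix_employee_name on employee (emp_name);  -- max(emp_name) can be read from one end of this index
create index ix_employee_age  on employee (emp_age);   -- max(emp_age) can be read from one end of this index

Because each index is ordered on its column, the maximum value sits at one end of the index, which is why a Top operator reading a single row from each index is enough.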

This kind of index recommendation can be achieved automatically by Tosska SQL Tuning Expert Pro for SQL Server:
Tosska SQL Tuning Expert Pro (TSES Pro™) for SQL Server – Tosska Technologies Limited

How to build indexes for slow first execution SQL – SQL Server?

You may suffer from SQL statements with a slow first execution time due to the long data caching process. The following simple SQL retrieves records from the EMPLOYEE table where EMP_SALARY < 500000 and orders the result set by EMP_NAME.

select   emp_id,
         emp_name,
         emp_salary,
         emp_address,
         emp_telephone
from     employee
where    emp_salary < 500000
order by emp_name;

The following is the query plan; the SQL takes 9.51 seconds for the first execution (without data cached) and 0.99 seconds for the second execution (with data cached).

The SQL cannot be tuned by SQL syntax rewrite or hints injection for either the first or the second execution, because SQL Server has already selected the best query plan for this simple statement. The problem is that if the condition "where emp_salary < 500000" changes, say from 500000 to 510000, or the EMPLOYEE data is flushed out of memory, the execution time is prolonged to up to 9.51 seconds again.
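
If you want to reproduce the slow first execution while testing, one common approach on a non-production SQL Server is to flush the buffer cache before running the statement, for example:

-- Test environments only: write dirty pages, then empty the buffer cache so the next run reads from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;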

Let's see whether we can build indexes to improve this situation. There is a common perception that a good index helps both the first execution time and the second execution time. So, I used a tool to explore a lot of index configurations, but none of them improved both executions. The following shows the performance of the second execution, with data cached, for the different index sets proposed by the tool. You can see the performance of "Index Set 1" is close to the original SQL, with a little variation due to the system's loading status, and all the other index sets are worse than the original SQL. Normally, we would give up tuning this SQL statement without even trying to see whether those recommended indexes are good for the first execution time.

I then tested those recommended indexes to see whether they help the first execution time, and it surprised me that "Index Set 1" showed a significant improvement, cutting the first execution time from 9.51 seconds to 0.65 seconds. That is a 14 times improvement that can make my database run more efficiently. So, be careful when tuning your SQL with new indexes: an index that is not good for the second execution with all data cached may still be very good for the first execution without data cached.
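
The actual columns of "Index Set 1" are not listed here, but purely as an illustration, a covering index along the following lines could reduce the physical reads of the first execution by letting SQL Server range-scan the filter column and pick up the selected columns from the index leaf level (the index name and column choice are assumptions, not the tool's actual recommendation):

create index ix_employee_salary_covering
on employee (emp_salary)
include (emp_id, emp_name, emp_address, emp_telephone);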

This kind of index recommendation can be achieved automatically by Tosska SQL Tuning Expert Pro for SQL Server.

Tosska SQL Tuning Expert Pro (TSES Pro™) for SQL Server – Tosska Technologies Limited

How to index SQL with an aggregate function for Oracle?

The following example SQL selects the maximum emp_address, which is not indexed, from the EMPLOYEE table with 3 million records; emp_grade is an indexed column.

select max(emp_address)
from   employee a
where  emp_grade < 4000

Since 80% of the EMPLOYEE table's records have to be retrieved to examine the maximum emp_address string, the Table Access Full of the EMPLOYEE table shown in the query plan is reasonable.

How many ways to build an index to improve this SQL?
Although it is a simple SQL, there are still 3 ways to build an index to improve it. The following are the possible indexes that can be built for the SQL (a DDL sketch follows the list); the first one is a single-column index, and options 2 and 3 are composite indexes with a different column order.
1. EMP_ADDRESS
2. EMP_GRADE, EMP_ADDRESS
3. EMP_ADDRESS, EMP_GRADE
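
A DDL sketch of the three candidate indexes (the index names are hypothetical):

create index emp_address_inx       on employee (emp_address);             -- option 1
create index emp_grade_address_inx on employee (emp_grade, emp_address);  -- option 2
create index emp_address_grade_inx on employee (emp_address, emp_grade);  -- option 3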

Most people would pick EMP_ADDRESS as the first choice to improve this SQL. Let's see what the query plan looks like if we build a virtual index for the EMP_ADDRESS column. You can see the estimated cost is reduced by almost half, but this query plan ends up not being used after the physical index is built for benchmarking, because actual statistics are then collected.
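
Virtual indexes in Oracle are usually created with the undocumented NOSEGMENT clause, which defines the index for the optimizer without building a physical segment. A sketch, assuming the hypothetical index name below and a test system (the hidden parameter is undocumented, so treat this as experimental):

-- Define a virtual index: no physical segment is created
create index emp_address_vix on employee (emp_address) nosegment;

-- Ask the optimizer in this session to consider virtual indexes when costing plans
alter session set "_use_nosegment_indexes" = true;

-- Then run EXPLAIN PLAN for the SQL to see the estimated cost with the virtual index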

The following query plan shows that the EMP_ADDRESS index is not used, and the plan is the same as the original SQL without any new index.

Let's try the second composite index (EMP_GRADE, EMP_ADDRESS). The new query plan shows an Index Fast Full Scan of this index, which is a reasonable plan since no table data needs to be retrieved. The execution time is reduced from 16.83 seconds to 3.89 seconds.

Let's test the last composite index (EMP_ADDRESS, EMP_GRADE), with EMP_ADDRESS placed as the first column of the composite index. It creates a new query plan that shows an extra FIRST ROW operation over an INDEX FULL SCAN (MIN/MAX), which dramatically reduces the execution time from 16.83 seconds to 0.08 seconds.

So, indexing is sometimes an art that deserves more of your attention; some potential solutions may perform beyond your expectations.

The best index solution is now more than 200 times faster than the original SQL without an index. This kind of index recommendation can be achieved automatically by Tosska SQL Tuning Expert for Oracle.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to use FORCE INDEX Hints to tune an UPDATE SQL statement?


We use FORCE INDEX hints to enable an index search for a SQL statement when a specific index is not used, because the database SQL optimizer thinks that not using that index will perform better. But enabling an index is not as simple as just adding an index search to the query plan; it may entirely change the structure of the query plan, which means that forecasting the performance of the new FORCE INDEX hint is not easy. Here is an example that shows you how to use a FORCE INDEX optimization hint to tune a SQL statement.

Here is a simple example SQL that updates EMP_SUBSIDIARY if the emp_id is found in EMPLOYEE with certain criteria.

update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (SELECT emp_id
                  FROM   EMPLOYEE
                  WHERE  emp_salary < 1000000
                    and  emp_grade < 1150)

The following is the query plan of this SQL; it takes 18.38 seconds. The plan shows a Full Table Scan of EMPLOYEE and then a Nested Loop to EMP_SUBSIDIARY with a Unique Key Lookup of the Emp_sub_PK index.

We can see that the filter condition "emp_salary < 1000000 and emp_grade < 1150" is applied during the full table scan of EMPLOYEE. The estimated "filtered" value (the ratio of rows produced per rows examined) is 3.79%, so it seems the MySQL SQL optimizer failed to use an index to scan the EMPLOYEE table. We should consider forcing MySQL to use either the emp_salary or the emp_grade index.
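
The rows-examined and "filtered" estimates quoted above come from MySQL's EXPLAIN output; as a sketch, you can inspect them for this statement directly (MySQL 5.6 and later can EXPLAIN an UPDATE):

EXPLAIN
update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (select emp_id
                  from   EMPLOYEE
                  where  emp_salary < 1000000
                    and  emp_grade < 1150);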

Unless you fully understand the data distribution and do a very precise calculation, you cannot tell which index is the best.

Let’s try to force the index of emp_salary first.

update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (select emp_id
                  from   EMPLOYEE FORCE INDEX(`emps_salary_inx`)
                  where  emp_salary < 1000000
                    and  emp_grade < 1150)

This SQL takes 8.92 seconds, about 2 times faster than the original query plan without the force index hint.

Let's try forcing the emp_grade index next.

update EMP_SUBSIDIARY
set    emp_name = concat(emp_name, '(Headquarter)')
where  emp_id in (select emp_id
                  from   EMPLOYEE FORCE INDEX(`emps_grade_inx`)
                  where  emp_salary < 1000000
                    and  emp_grade < 1150)

Here is the resulting query plan of the SQL with the FORCE INDEX(`emps_grade_inx`) hint injected; the execution time is reduced to 3.95 seconds. The new query plan shows an Index Range Scan of EMPLOYEE via the EMP_GRADE index; the result is fed to a subquery2 (temp table) and then a Nested Loop to EMP_SUBSIDIARY for the update. This query plan has a lower estimated cost and performs better than the original SQL. Because of the limited plan space in the real-time SQL optimization process, this plan cannot be generated for the original SQL text, so manual hint injection is necessary to help the MySQL SQL optimizer find a better query plan for this statement.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for MySQL; it shows that the hint-injected SQL is more than 4.6 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-tse-for-mysql-2/


The Importance of Disk Operations in Query Performance Tuning


DBAs can't ignore disk operations when working on query performance tuning. When talking about databases, 'disk' may be called by one of its many names, such as 'storage', 'I/O', 'reads', or disk operations.

Although database professionals know all these terms mean the same thing, they might confuse those outside the field. Whichever term is used, it usually refers to the number of disk operations required to fetch data from the disk resource.

Why You Can’t Ignore Disk Operations During Query Performance Tuning

The fact remains, however, that an overwhelming majority of SQL Server databases face a bottleneck when it comes to the disk resource. This doesn't change regardless of whether you have old-school hard drives or the latest flash storage arrays. Given below are some major reasons behind this, and how they can be affected by query optimization:

  1. Most slow queries are slow because they have to scan a large amount of data. A lot of the time, this is unnecessary and it’s making your SQL Server perform a lot of unneeded and really sluggish read operations.
  2. When the database reads data, it requires a place to store that information, which it does in RAM. However, since RAM has a limited capacity, older information starts getting removed as newer data comes in.
  3. Because RAM is never enough, it is often unable to store all the data that SQL Server fetches. Therefore, the remaining data has to be kept on the disk, which is far slower than RAM. The information that isn't present in RAM has to be fetched from the disk, an operation that is known as the slowest of all database operations. Some DBAs even compare fetching data from RAM versus the disk to sprinting versus tip-toeing.
  4. So, if we tune a query to read less data than it did before, such as twenty rows instead of twenty thousand, it helps in two ways. Not only does it reduce the workload on the database in terms of disk operations, but it also requires far fewer resources, including CPU and RAM, to process the data. That said, the end user is unaware of all these operations; all they know and appreciate is the speed, or the time it takes for the query to fetch information. To put it simply, they just want the screen in the app to return as quickly as possible. This is why query performance tuning focuses on decreasing disk reads.
  5. DBAs also perform tuning to lower the other resources, such as CPU or RAM usage. But they only do this in certain special situations where such resources are consistently being overused at dangerous levels. For instance, if the CPU is in constant use of 90% or above, then the DBA will consider CPU tuning.
  6. Tuning queries that fetch large volumes of data to fetch much smaller volumes instead improves SQL Server capacity. This is because when a query takes up fewer resources, it leaves room for more users and queries. This allows the same server to take greater loads than it could. Performing MySQL query optimization also improves the lifespan of the same server, delaying the requirement for a hardware upgrade.

Summing Up

The above-mentioned reasons shed light on the fact that disk operations play a major role in enabling efficient database query performance. You can’t always blame the CPU; in fact, you can rarely do so since 95% of bottlenecks occur on the disk resource.

The CPU, on the other hand, is often only a lagging indicator; its usage can decrease once the storage reads are reduced.

Query Performance Tuning: Making an SQL Monitor Report

Creating a SQL Monitor Report plays an important role in database optimization, as it helps the user observe what is happening during the execution of long-running statements.

In this post, we'll discuss how to create one such report that may help you during query performance tuning.

SQL Monitor Report: Bringing DBAs One Step Closer to Database Query Optimization

To begin with, you need to make sure your database has the tuning and diagnostic pack. Otherwise, Oracle will not authorize the creation of SQL Monitor Reports.

Also, such reports should be made after an adequate amount of time has passed, to allow query bottlenecks to reveal themselves. This is typically done for seemingly endless queries that run for long periods of time. In general, however, creating SQL Monitor reports is recommended for completed queries.

Let’s look at an example: A DBA has a simple plan with a hash join involving two big tables. Suppose one of these tables takes two seconds to undergo a complete table scan, whereas the second one takes nine seconds. 

Although only around two seconds out of a total of eleven seconds are spent on the first table, it will appear as though a hundred percent of the query time is being spent on it if you create a SQL Monitor report during those first two seconds.

Creating Reports for Excessively Long Execution Plans

Really long execution plans – those that exceed three hundred lines – don’t have a SQL monitor report generated for them by default. This gets cumbersome because long execution plans are where these reports are needed the most! 

In such cases, there are two things you can do to make the database generate a report. These are:

  1. Prior to issuing the query in question, run the following in the session operating the query:

alter session set "_sqlmon_max_planlines" = 800;

  2. Apply the following hint while executing the query:

/*+ MONITOR */
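
For example, the hint goes directly inside the statement you want monitored (the query below is only a hypothetical illustration):

select /*+ MONITOR */ emp_id, emp_name
from   employee
where  emp_salary < 500000;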

How to Create an HTML Version of the Monitor Report 

The HTML version of a SQL Monitor report offers more details than the text report, which is why database professionals often recommend it. It can be generated with the following query:

select dbms_sqltune.report_sql_monitor(
         sql_id         => '&v_sql_id.',
         session_id     => '&v_session_id.',
         session_serial => '&v_serial.',
         type           => 'HTML',
         report_level   => 'ALL',
         inst_num       => '&v_instance.') report
from   dual;
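
Because report_sql_monitor returns a CLOB, in SQL*Plus you typically need to widen the LONG settings and spool the output to a file before running the query above; a sketch (the file name is arbitrary):

set long 1000000 longchunksize 1000000 linesize 32767 pagesize 0 trimspool on
spool sql_monitor_report.html
-- run the select dbms_sqltune.report_sql_monitor(...) statement here
spool off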

Not every variable needs to be plugged in; you just need enough of them for Oracle to identify the particular SQL/session combination. If there is only a single session executing the statement on the entire database, the sql_id alone is enough.
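
If you do not have the identifiers at hand, one way to find them for a statement that is currently being monitored is to query the V$SQL_MONITOR view (a sketch; adjust the filter to your situation):

select sql_id, sid, session_serial#, status
from   v$sql_monitor
where  status = 'EXECUTING';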

Creating a Text Monitor Report Instead

In case you'd rather make a text report, whether due to some problem with the HTML report or simply preference, here's how to do it:

select dbms_sqltune.report_sql_monitor(
         sql_id         => '&v_sql_id.',
         session_id     => '&v_session_id.',
         session_serial => '&v_serial.',
         type           => 'TEXT',
         report_level   => 'ALL',
         inst_num       => '&v_instance.') report
from   dual;