How to use Query Rewriter Plugin to tune SQL in MySQL database I

The Query Rewriter Plugin in MySQL is a component that allows you to modify incoming SQL queries before execution. It provides the ability to transform, route, filter, or expand queries based on specific requirements. This plugin operates at the SQL layer and can be utilized for optimizing query performance, enforcing security policies, implementing data partitioning strategies, or adding additional business logic to queries. With the Query Rewriter Plugin, you have the power to customize and shape SQL queries to meet your specific needs, providing flexibility and control over query execution within the MySQL server.

The Query Transformation feature enables you to rewrite or transform the original query into an equivalent or more efficient form. This can be useful for optimizing performance, simplifying complex queries, or enforcing certain query plans.

You must install the Query Rewriter Plugin before using this feature. The concept of the Query Rewriter is simple: it is a set of predefined SQL statements used to replace certain patterns of SQL statements fired from your application programs.

If you have installed the plugin, the following SQL statement can be used to define your SQL replacement rules and handle error messages.

INSERT INTO query_rewrite.rewrite_rules (message, pattern, replacement)
VALUES(Unique_ID, Original_SQL, Rewrite_SQL);

The query_rewrite.rewrite_rules table in MySQL stores the rules used by the Query Rewriter Plugin to rewrite SQL queries. Two of its columns are central to rule definition:

Pattern – This column represents the pattern or condition that triggers the rewriting of a SQL query. It defines the specific query or query pattern to match.

Replacement – This column specifies the replacement or transformation that should be applied to the matched query or query pattern.

When a SQL query is executed, the Query Rewriter Plugin checks the query_rewrite.rewrite_rules table for matching patterns. If a pattern matches the executed query, the plugin rewrites the query using the corresponding replacement. This allows you to modify the query structure, optimize it, or add custom logic based on specific patterns or conditions.

I utilize the message column to define a temporary unique id for the SQL replacement rule, so the actual rule id can be extracted with the following SQL.

SELECT id into :SID FROM query_rewrite.rewrite_rules where message=Unique_ID;

When you make changes to the query rewrite rules in the query_rewrite.rewrite_rules table, those changes are not applied immediately. Instead, MySQL caches the rules in memory for better performance. To ensure that the updated rules take effect immediately, call the query_rewrite.flush_rewrite_rules() stored procedure.

CALL query_rewrite.flush_rewrite_rules();

If a load error occurs, the plugin also sets the Rewriter_reload_error status variable to ON and stores the error message in the message column.

SELECT message FROM query_rewrite.rewrite_rules where id=:SID;
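
Putting these steps together, here is a minimal end-to-end sketch. The rule itself (capping an unbounded lookup on a hypothetical orders table with a LIMIT) and the tag ‘RULE-001’ are for illustration only; substitute the pattern and replacement that fit your application.

INSERT INTO query_rewrite.rewrite_rules (message, pattern, replacement)
VALUES ('RULE-001',
        'SELECT * FROM orders WHERE customer_id = ?',
        'SELECT * FROM orders WHERE customer_id = ? LIMIT 1000');

-- Capture the generated rule id while the message column still carries our tag
SELECT id INTO @sid FROM query_rewrite.rewrite_rules WHERE message = 'RULE-001';

-- Load the rules into memory, then check that rule's message column for load errors
CALL query_rewrite.flush_rewrite_rules();
SELECT message FROM query_rewrite.rewrite_rules WHERE id = @sid;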

Actually, the Query Rewriter Plugin is powerful and easy to use. The most challenging aspect is finding a replacement SQL for your poorly performing SQL statement. Tosska DB Ace Enterprise for MySQL can assist you in automating this process, from identifying poorly performing SQL statements to rewriting SQL syntax and deploying replacement rules.

Tosska DB Ace Enterprise for MySQL – Tosska Technologies Limited

DBAM Tune Rewriter demo – YouTube

How to tune SQL in a Database Object with Plan Guide for SQL Server ?

Leveraging Plan Guides in MS SQL Server can prove beneficial for optimizing the performance of a particular query within database objects like stored procedures and trigger applications. This technique allows you to enhance query efficiency without requiring modifications to the application’s source code.

Here are the steps to use Plan Guides to tune third-party application SQL in MS SQL Server without changing the source code:

  1. Identify the SQL statements that are causing performance issues in the database object (a sketch of one way to do this follows this list).
  2. Create a Plan Guide that offers an optimized execution plan for the specified query by incorporating query hints to influence the decision-making process of the optimizer.
  3. Test the Plan Guide to ensure that it provides the desired performance improvements and does not cause any unintended side effects.
  4. Deploy the Plan Guide to production and monitor the performance of the application to ensure that the Plan Guide is being used and is providing the desired performance improvements.
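
For step 1, the following is a minimal sketch of how the costly statements inside database objects could be identified from the plan cache. It assumes the plans of interest are still cached, and the metric in the ORDER BY (total elapsed time) can be swapped for reads or CPU as needed.

SELECT TOP (20)
       OBJECT_NAME(st.objectid, st.dbid)  AS object_name,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.total_elapsed_time,
       qs.execution_count
FROM   sys.dm_exec_query_stats AS qs
CROSS  APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE  st.objectid IS NOT NULL          -- keep only statements that belong to modules
ORDER  BY qs.total_elapsed_time DESC;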

Before optimizing a database object’s SQL statement executed by an application program without modifying the source code, it is crucial to ensure that the SQL statement matches the one specified in the Plan Guide exactly, including whitespace and comments. Additionally, it is important to match the database object that executes the SQL statement.
Below is an example that demonstrates how to optimize a SQL statement from a database object called getRD2. The SQL statement is selected and highlighted by the user.

Creating a plan guide may initially seem complex, but it is a valuable method for improving SQL performance when you cannot modify the source code or lack the necessary permissions to do so. The most time-consuming part is finding the best query hint for the SQL statement using the @hints = N'OPTION(query_hint [ ,...n ])' parameter of the sp_create_plan_guide system procedure. If you don’t have extensive knowledge of SQL tuning techniques or enough time for experimentation, a solution is available that simplifies the process. It captures SQL, identifies the SQL source type, automatically optimizes query hints, and facilitates easy deployment of plan guides. The following product automatically identifies a Plan Guide that can enhance SQL performance by 75.81%, as depicted in the accompanying screenshot.

After identifying the optimal Plan Guide, we can proceed to deploy it along with the stored procedure to the SQL Server database. This deployment will result in improved performance for the stored procedure named getRD2, all without requiring any modifications to the source code of the stored procedure.

You also have the option to manually create the Plan Guide using the system procedure sp_create_plan_guide once you have obtained a suitable hint solution for the SQL statement.
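
Below is a minimal sketch of such a manual creation. The module name dbo.getRD2 follows the example above, but the statement text and the OPTION hint are illustrative only; the @stmt text must match the statement inside the procedure character for character, including whitespace and comments.

EXEC sp_create_plan_guide
     @name            = N'guide_getRD2',
     @stmt            = N'SELECT * FROM dbo.orders WHERE order_date > @start_date',
     @type            = N'OBJECT',
     @module_or_batch = N'dbo.getRD2',
     @params          = NULL,
     @hints           = N'OPTION (HASH JOIN, MAXDOP 1)';

The guide can later be disabled or dropped with the sp_control_plan_guide system procedure.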

For detailed information, kindly visit our website and take a look at our demo video.

Tosska DB Ace Enterprise for SQL Server – Tosska Technologies Limited
DBAS Tune SQL PG Object – YouTube

How to analyze the performance impact of a database environment change using what-if analysis ?

To properly evaluate the impact of a set of SQL statements during a database environment change, it’s crucial to have a thorough understanding of how SQL query performance can be impacted. There are two primary types of performance changes that can occur in SQL queries. The first type, which I refer to as “progressive change,” is typically caused by changes in statistics, such as fluctuations in data volume in relevant tables or index pages. If the statistics change is not significant enough to trigger a new query plan, the query plan will remain the same, and the performance of the SQL query will not be significantly altered compared to the original statistics.

The second type of performance change, which I refer to as “quantum change,” occurs when a new query plan is introduced due to a significant change in statistics or schema. This type of change can have a major impact on performance, occasionally resulting in performance disasters.

When making changes to the database environment, it is essential to closely monitor the performance of SQL queries and take appropriate measures to optimize the affected statements. To track critical SQL statements before and after environment changes, there are some general steps you can follow:

  1. Extract the SQL statements along with their query plan and performance statistics from SGA or AWR (a capture-and-compare sketch follows this list).
  2. Apply the environment changes to the database, such as creating new indexes, gathering statistics, upgrading the database, or forecasting the performance of a software deployment in the production database.
  3. Obtain the query plan from the changed database environment.
  4. Compare the query plan for each SQL statement to identify any differences.
  5. Look for potential issues such as unused indexes, high-cost SQL changes, etc.
  6. Benchmark the query plan for the modified SQL statements to detect any degraded performance.
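
A minimal sketch of this capture-and-compare cycle is shown below. It works from the SGA via V$SQL (use DBA_HIST_SQLSTAT instead if you prefer AWR, which requires the Diagnostics Pack), and the schema name APP_SCHEMA is a placeholder for your application schema.

-- Snapshot the statements of interest before the environment change
CREATE TABLE sql_baseline AS
SELECT sql_id, plan_hash_value, elapsed_time, cpu_time, disk_reads, executions
FROM   v$sql
WHERE  parsing_schema_name = 'APP_SCHEMA';

-- ... apply the environment change, let the workload run, then compare ...
SELECT b.sql_id,
       b.plan_hash_value AS old_plan,
       c.plan_hash_value AS new_plan
FROM   sql_baseline b
JOIN   v$sql c ON c.sql_id = b.sql_id
WHERE  c.plan_hash_value <> b.plan_hash_value;

Statements returned by the second query are the “quantum changes” described above and are the first candidates for benchmarking and tuning.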

If you only have a small number of SQL statements to track in terms of performance before and after environment changes, the above steps can be done manually. However, it can be challenging if you have hundreds of SQL statements to monitor without a tool. Tosska DB Ace for Oracle is equipped with a robust tool that can assist you in tracking the performance differences of SQL statements between two databases.

DBAO SQL Performance Tracker – YouTube
Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How To Use 80/20 Rule To Tune A Database Application II ?

The previous article “How To Use 80/20 Rule To Tune A Database Application I “ demonstrated how the 80/20 Rule can be applied to evaluate the overall performance of SQL workload in a database. In this example, a set of 90 SQL statements retrieved from Oracle SGA is presented in a chart that lists each statement based on its resource usage in descending order, with the most resource-intensive SQL on the left. The analysis reveals that roughly 14.44% of the SQL statements consume 80% of the total elapsed time, while 21.11% of the SQL statements consume 80% of the total CPU time, indicating that the SQL workload distribution aligns well with the 80/20 rule. Therefore, tuning the SQL may not be necessary since it is unlikely to result in significant performance improvements.

However, to further optimize the database performance cost-effectively, it is recommended to conduct an in-depth investigation of the top 20% of high workload SQL statements. This will reveal that the resource utilization drops steeply in the first few SQL statements, making them the most critical candidates for optimization.

Let’s aim to reduce the proportion of the total resource consumption from 80% to 60% and examine the SQL statements that are responsible for utilizing the resources. The results are interesting and reveal that 3 SQL statements account for 60% of the elapsed time, 6 SQL statements account for 60% of the CPU time, and only one SQL statement accounts for 60% of the disk reads. By focusing on these SQL statements, it is possible to enhance up to 60% of the database workload. For instance, if the database is experiencing an IO bottleneck, concentrating on the one SQL statement could yield savings of up to 60% on disk reads.

You can utilize Excel to conduct a simulation of the 80/20 rule analysis described above, providing a comprehensive overview of the distribution of the SQL workload. This approach facilitates a rapid evaluation of the overall health of the database’s SQL performance, as well as the associated costs and benefits of optimizing high workload SQL statements. Furthermore, the SQL resource spectrum analysis is integrated into our Tosska DB Ace for Oracle software.
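
If you prefer to work directly against the database, a minimal sketch of such a spectrum query is shown below; it assumes access to V$SQLAREA and uses elapsed time, but cpu_time or disk_reads can be substituted to analyze the other dimensions. Counting the rows until the cumulative percentage reaches 60% or 80% gives figures like those quoted above.

SELECT sql_id,
       elapsed_time,
       ROUND(100 * SUM(elapsed_time) OVER (ORDER BY elapsed_time DESC, sql_id)
                 / SUM(elapsed_time) OVER (), 2) AS cumulative_pct
FROM   v$sqlarea
ORDER  BY elapsed_time DESC;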

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Inspect SQL – YouTube

How To Use 80/20 Rule To Tune A Database Application I ?

In 2002, Microsoft reported that the majority of errors and crashes in their software were caused by a minority of the bugs they detected, specifically about 20%. This phenomenon, known as the Pareto principle or the eighty-twenty rule, is also relevant to the analysis of database SQL behavior. It is assumed that only 20% of SQL statements should account for 80% of the resource consumption of the entire database system. Any application system in which less than 20% of SQL statements consume more than 80% of database resources is considered abnormal and should be reviewed and optimized.

Using a SQL resource spectrum analysis technique, users have the ability to set a resource threshold percentage, typically around 80%, to pinpoint the top M% of SQL statements that are responsible for the majority of resource consumption. If the analysis reveals that only a few SQL statements, such as only 4%, are responsible for over 80% of the total resource consumption, it suggests that optimizing these specific SQL statements has the potential to significantly enhance the performance of the entire database.

The presented example displays a collection of 90 SQL statements obtained from Oracle SGA, with each statement listed in a chart based on its resource usage, arranged in descending order starting from the highest consumption SQL on the left-hand side. The analysis reveals that approximately 14.44% of the SQL statements account for 80% of the total elapsed time, while 21.11% of the SQL statements account for 80% of the total CPU time. This indicates that the distribution of the SQL workload aligns closely with the 80/20 rule. Therefore, there may not be a pressing need for SQL tuning as it is unlikely to result in significant performance improvements.

Another example highlights a significant deviation from the 80/20 rule in the distribution of the SQL workload. The analysis shows that around 4 SQL statements are accountable for 80% of the total elapsed time, while another 4 SQL statements are responsible for 80% of the total CPU time, and 2 SQL statements contribute to 80% of the total disk reads. This indicates that by optimizing just these 4 SQL statements, it is possible to achieve a substantial improvement in the overall performance of the database.

You can utilize Excel to conduct a simulation of the 80/20 rule analysis described above, providing a comprehensive overview of the distribution of the SQL workload. This approach facilitates a rapid evaluation of the overall health of the database’s SQL performance, as well as the associated costs and benefits of optimizing high workload SQL statements. Furthermore, the SQL resource spectrum analysis is integrated into our Tosska DB Ace for Oracle software.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Inspect SQL – YouTube

How to Tune SQL with DB Link for Oracle II ?

Here is an example SQL; the query retrieves data from the employee, department, and grade tables in the remote database @richdb.

SELECT   *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
ORDER BY emp_id

The following is the query plan of this SQL; it takes 15.92 seconds to finish. The first step of the query plan is ‘SELECT STATEMENT REMOTE’, which means the entire query will be executed on the remote database @richdb and the result sent back to the local database. The query plan is a little complicated, and it is not easy to tell whether it is optimal or not. But one thing we can try is to have the query partially executed in the local database @local.

In order to request that Oracle perform certain join operations in the local database, the SQL query must include at least one table that is executed in the local database. This allows the use of the hint /*+ DRIVING_SITE ( [ @ queryblock ] tablespec ) */ in the SQL query. If no tables are explicitly executed in the local database, there is no means to request that Oracle attempt to perform join operations in the local database.

Let’s add a dummy condition “EXISTS (SELECT 'X' FROM DUAL)” and a hint /*+ DRIVING_SITE(DUAL) */ to the SQL to force Oracle to execute some join operations in the local database.

SELECT   /*+ DRIVING_SITE(DUAL) */ *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
         AND EXISTS ( SELECT 'x'
                      FROM   dual)
ORDER BY emp_id

Below is the query plan for the modified SQL, which takes 4.08 seconds and is approximately 4 times faster than the original SQL statement where only one join operation is performed in the remote database.

Adding an ORDERED hint to the SQL query can result in further optimization. This will break down the compound statement highlighted in the previous query plan into individual table data remote extraction, as shown in the following query plan.

SELECT   /*+ DRIVING_SITE(DUAL) ORDERED */ *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
         AND EXISTS ( SELECT 'x'
                      FROM   dual)
ORDER BY emp_id

If you are familiar with Oracle Exadata, you may notice that the data retrieval process for REMOTE tables in remote database @richdb works similarly to that of the Exadata Storage Server.

It is important to remember that applying this technique to SQL queries with a DB Link is only beneficial in certain environments. For instance, it is ideal when the network speed is good, data traffic is not heavy, and the workload on the local database is low.

Tosska DB Ace for Oracle can automatically perform this type of rewrite, resulting in an SQL query that runs almost 10 times faster than the original.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Tune DB Link SQL – YouTube

How to Tune SQL with IN Subquery with Intersect for Oracle?

Here is an example SQL that retrieves data from the EMPLOYEE and DEPARTMENT tables for employees whose grade code is found in the GRADE table.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM grade
                         WHERE grd_min_salary < 200000)
       AND emp_dept < 'D'

The following is the query plan of this SQL; it takes 8.3 seconds to finish. The query plan shows a Hash Join of GRADE and EMPLOYEE and then a Hash Join to DEPARTMENT. It looks like Oracle gave up any Nested Loops operations after the actual number of rows was returned from the GRADE table in this adaptive plan.

In order to ask Oracle to consider the Nested Loops operations, I added an extra Intersect operation in the subquery to rapidly narrow down the result set of grd_id returned from the GRADE table first.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM   grade
                         WHERE  grd_min_salary < 200000
                         INTERSECT
                         SELECT e1.emp_grade
                         FROM   employee e1
                         WHERE  emp_dept < 'D')
       AND emp_dept < 'D'

The rewritten SQL generates a query plan that is entirely different from the original one. The new plan uses Nested Loops from DEPARTMENT to EMPLOYEE as the first steps and then a Hash Join to the GRADE table. The new plan now takes only 0.81 seconds.


This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle; it shows that the rewrite is more than 10 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to Tune SQL Statement with LCASE function on index field?

Some business requirements may need to compare the lower case of an indexed column to a given string as a data retrieval criterion.

Here is an example SQL that retrieves records from the EMPLOYEE table if the lower case of the employee name is equal to the string ‘richard’.

select *
from   employee
where  LCASE(emp_name) = 'richard'

The following is the query plan of this SQL; it takes 17 seconds to finish. The plan shows a full table scan of the EMPLOYEE table.

You can see that this SQL cannot utilize an index scan even though emp_name is an indexed field. Let me add a “FORCE INDEX(EMPS_NAME_INX)” hint to the SQL in the hope that it helps the MySQL optimizer use an index scan, but it fails to enable the index scan anyway. So I add one more dummy condition “emp_name >= ''”, an always-true condition (emp_name should be greater than or equal to the smallest, empty string) that is used to increase the cost of not using the EMPS_NAME_INX index. One more condition, “emp_name is null”, is added to keep the result correct if emp_name is a null value.

select *
from   employee force index(EMPS_NAME_INX)
where  LCASE(emp_name) = 'richard'
       and ( emp_name >= ''
             or emp_name is null )

Here is the query plan of the rewritten SQL, which runs much faster. The new query plan shows that an index scan is now used, and the SQL takes only 2.79 seconds.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for MySQL; it shows that the rewrite is more than 6 times faster than the original SQL.

https://tosska.com/tosska-sql-tuning-expert-tse-for-mysql-2/

How to use ROWID to improve an UPDATE statement for Oracle?

The following is an UPDATE SQL with a subquery; it updates the EMPLOYEE table for rows whose emp_dept matches the records returned from the subquery.

update employee
   set emp_name = 'testing'
 where emp_dept IN (select dpt_id
                    from   department
                    where  dpt_name like 'A%')
   and emp_grade > 2000

You can see Oracle uses a hash join of the DEPARTMENT table and the EMPLOYEE table to execute the update process. This query plan takes 1.96 seconds to complete, and no index is used even though emp_dept, dpt_id, and emp_grade are indexed columns. It looks like the most expensive operation is the Table Access Full scan of the EMPLOYEE table.

Let’s rewrite the SQL into the following syntax to eliminate EMPLOYEE’s Table Access Full operation from the query plan. The new subquery is used to force EMPLOYEE records to be extracted only for emp_dept values found in the DEPARTMENT table with dpt_name like ‘A%’. The ROWID returned from the EMPLOYEE subquery ensures a more efficient ROWID access to the outer EMPLOYEE table.

UPDATE employee
SET    emp_name = 'testing'
WHERE  ROWID IN (SELECT ROWID
                 FROM   employee
                 WHERE  emp_dept IN (SELECT dpt_id
                                     FROM   department
                                     WHERE  dpt_name LIKE 'A%'))
       AND emp_grade > 2000

You can see the final query plan with this syntax has a better cost without full table access to the EMPLOYEE table. The new syntax takes 0.9 seconds and it is more than 2 times faster than the original syntax.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle. There is another SQL rewrite with similar performance, but it is not suitable for discussion in this short article; maybe I will discuss it later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to build indexes for multiple Max() functions for SQL Server?

For some SQL statements with multiple Max() functions in the select list and nothing in the Where clause, we have different methods to create new indexes to improve the SQL speed.

Here is an example SQL; it retrieves the maximum name and age from the employee table.
select max(emp_name),
       max(emp_age)
from   employee

The following is the query plan that takes 9.27 seconds.

The SQL cannot be tuned by SQL syntax rewrite or hints injection, and the SSMS cannot recommend any index to improve the SQL.

For this kind of SQL, we can consider building either a composite index or two individual indexes on emp_name and emp_age. A new composite index of these two columns (emp_age, emp_name) can improve the SQL by around 7 times. The following query plan shows that the new composite index is used, but it has to scan the entire index for the two Stream Aggregate operations before getting max(emp_name) and max(emp_age).

How about building two individual indexes for emp_name and emp_age instead? The following is the result and query plan with these two indexes created. A Top operator selects the first row from each index and returns it to the Stream Aggregate operation, and then a Nested Loops operator joins the two maximum results together. It is 356 times faster than the original SQL. The index definitions for both options are sketched below.
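
For reference, the two indexing options discussed above could be created as follows; the index names are illustrative only.

-- Option 1: one composite index (both aggregates still scan the whole index)
CREATE NONCLUSTERED INDEX ix_employee_age_name ON employee (emp_age, emp_name);

-- Option 2: two single-column indexes, so each MAX() is answered by reading
-- a single row from the end of its own index
CREATE NONCLUSTERED INDEX ix_employee_name ON employee (emp_name);
CREATE NONCLUSTERED INDEX ix_employee_age  ON employee (emp_age);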

This kind of index recommendation can be achieved automatically by Tosska SQL Tuning Expert Pro for SQL Server:
Tosska SQL Tuning Expert Pro (TSES Pro™) for SQL Server – Tosska Technologies Limited