The Overlooked Significance of Performance Deployment in Software Deployment

Performance deployment is a critical aspect of software deployment that is often undervalued. While it’s important to ensure that software is deployed correctly, it’s equally important to optimize its performance in the production environment. By recognizing the significance of performance deployment during the software deployment process, developers can ensure that their software performs well and meets the needs of its users. Focusing on performance deployment can help prevent performance issues and improve user satisfaction with the software.

The Missing Link of Performance Deployment Between Testing Database and Production Database
Despite extensive pre-deployment testing, there is still a chance of encountering performance problems in the production environment during software performance deployment. The following issues may arise:

  1. Inability to copy production data to the testing database.
  2. Significant differences in hardware and software configuration between the testing and production databases.
  3. Inability to test software in the production database due to security restrictions.
  4. DML SQL statements in the new software that may compromise the data integrity of the production database.

It is not unusual for users to face performance issues or encounter application errors following a release of new application code.

Ensuring Performance Deployment with a Pre-Deployment Process
The following describes a novel approach for ensuring performance reliability while deploying software. The idea is simple: since it is not feasible to run the new application code on the production database, why not obtain the query plans of every SQL statement from the production database instead? This way, we can assess the performance of each SQL statement in the application code that is intended for deployment to the production database.

Suppose there are 10 SQL statements in the new application code that we need to identify in the testing database. We clear the shared pool and then execute the new application in the testing database to isolate these 10 statements. This lets us capture and analyze the 10 SQL statements and obtain their query plans from the production database; a minimal sketch of this step follows. The table after the sketch presents various potential outcomes resulting from the query plan comparison.
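This capture-and-explain step can be done with plain SQL. The sketch below is illustrative only: the schema name and the sample statement are assumptions, and flushing the shared pool should be avoided on a shared testing instance.

-- In the TESTING database: empty the shared pool so that only the new
-- application's statements are captured (requires ALTER SYSTEM privilege).
ALTER SYSTEM FLUSH SHARED_POOL;

-- Run the new application code, then list the statements it parsed.
SELECT sql_id, sql_text
FROM   v$sql
WHERE  parsing_schema_name = 'APP_SCHEMA';   -- hypothetical schema name

-- In the PRODUCTION database: generate the query plan of each captured
-- statement without executing it.
EXPLAIN PLAN FOR
SELECT * FROM employee WHERE emp_dept = :1;  -- paste each captured statement here

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);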

Observation | Possible reasons
Explain Plan error in the production database | The SQL statement requires access to objects that are not present in the production database.
Query plan changes | Significant statistical differences between the testing and production databases, including differences in the database schema, such as missing or new partitions and other changes affecting the database’s structure and organization. Benchmarking the SQL may be necessary because performance may change significantly.
Unused indexes | Some indexes used in the testing database are not used in the production database. Benchmarking the SQL may be necessary because performance may change significantly.
New used indexes | Some indexes used in the production database are not used in the testing database. Benchmarking the SQL may be necessary because performance may change significantly.
Total cost changes | Changes in the overall query plan cost for the 10 SQL statements. If the production database has a larger data volume than the testing database, the cost increase will be greater.

DBAO SQL Performance Tracker – YouTube
Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How to analyze the performance impact of a database environment change using what-if analysis?

To properly evaluate the impact of a set of SQL statements during a database environment change, it’s crucial to have a thorough understanding of how SQL query performance can be impacted. There are two primary types of performance changes that can occur in SQL queries. The first type, which I refer to as “progressive change,” is typically caused by changes in statistics, such as fluctuations in data volume in relevant tables or index pages. If the statistics change is not significant enough to trigger a new query plan, the query plan will remain the same, and the performance of the SQL query will not be significantly altered compared to the original statistics.

The second type of performance change, which I refer to as “quantum change,” occurs when a new query plan is introduced due to a significant change in statistics or schema. This type of change can have a major impact on performance, occasionally resulting in performance disasters.

When making changes to the database environment, it is essential to closely monitor the performance of SQL queries and take appropriate measures to optimize the affected statements. To track critical SQL statements before and after environment changes, there are some general steps you can follow (a minimal sketch of steps 1 and 4 appears after the list):

  1. Extract the SQL statements along with their query plan and performance statistics from SGA or AWR.
  2. Apply the environment changes to the database, such as creating new indexes, gathering statistics, upgrading the database, or forecasting the performance of a software deployment in the production database.
  3. Obtain the query plan from the changed database environment.
  4. Compare the query plan for each SQL statement to identify any differences.
  5. Look for potential issues such as unused indexes, high-cost SQL changes, etc.
  6. Benchmark the query plan for the modified SQL statements to detect any degraded performance.
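As a minimal sketch of steps 1 and 4 (the staging table name and the column list are assumptions; V$SQL only reflects statements still cached in the SGA):

-- Step 1: snapshot the statements to track, with their plans and statistics, from the SGA.
CREATE TABLE sql_plan_before AS
SELECT sql_id, plan_hash_value, elapsed_time, cpu_time, disk_reads, executions
FROM   v$sql
WHERE  executions > 0;

-- Step 4: after applying the environment change and re-running the workload,
-- list the statements whose query plan has changed.
SELECT b.sql_id,
       b.plan_hash_value AS plan_before,
       s.plan_hash_value AS plan_after
FROM   sql_plan_before b
JOIN   v$sql s ON s.sql_id = b.sql_id
WHERE  s.plan_hash_value <> b.plan_hash_value;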

If you only have a small number of SQL statements whose performance you need to track before and after environment changes, the above steps can be done manually. However, it becomes challenging to monitor hundreds of SQL statements without a tool. Tosska DB Ace for Oracle is equipped with a robust tool that can assist you in tracking the performance differences of SQL statements between two databases.

DBAO SQL Performance Tracker – YouTube
Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How To Use 80/20 Rule To Tune A Database Application II?

The previous article “How To Use 80/20 Rule To Tune A Database Application I” demonstrated how the 80/20 Rule can be applied to evaluate the overall performance of the SQL workload in a database. In this example, a set of 90 SQL statements retrieved from the Oracle SGA is presented in a chart that lists each statement by its resource usage in descending order, with the most resource-intensive SQL on the left. The analysis reveals that roughly 14.44% of the SQL statements consume 80% of the total elapsed time, while 21.11% of the SQL statements consume 80% of the total CPU time, indicating that the SQL workload distribution aligns well with the 80/20 rule. Therefore, tuning the SQL may not be necessary, since it is unlikely to result in significant performance improvements.

However, to further optimize the database performance cost-effectively, it is recommended to conduct an in-depth investigation of the top 20% of high workload SQL statements. This will reveal that the resource utilization drops steeply in the first few SQL statements, making them the most critical candidates for optimization.

Let’s aim to reduce the proportion of the total resource consumption from 80% to 60% and examine the SQL statements that are responsible for utilizing the resources. The results are interesting and reveal that 3 SQL statements account for 60% of the elapsed time, 6 SQL statements account for 60% of the CPU time, and only one SQL statement accounts for 60% of the disk reads. By focusing on these SQL statements, it is possible to enhance up to 60% of the database workload. For instance, if the database is experiencing an IO bottleneck, concentrating on the one SQL statement could yield savings of up to 60% on disk reads.
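If you want to reproduce this spectrum analysis without a tool, one hedged starting point (elapsed time is used here as the only resource measure; CPU time and disk reads can be ranked the same way) is to rank the statements in V$SQL and compute each statement's cumulative share of the total:

SELECT sql_id,
       elapsed_time,
       ROUND(100 * SUM(elapsed_time) OVER (ORDER BY elapsed_time DESC)
                 / SUM(elapsed_time) OVER (), 2) AS cumulative_pct
FROM   v$sql
WHERE  elapsed_time > 0
ORDER  BY elapsed_time DESC;

The rows whose cumulative_pct falls at or below the chosen threshold (80% or 60%) are the candidate statements discussed above, and the result can be exported to Excel for charting.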

You can utilize Excel to conduct a simulation of the 80/20 rule analysis described above, providing a comprehensive overview of the distribution of the SQL workload. This approach facilitates a rapid evaluation of the overall health of the database’s SQL performance, as well as the associated costs and benefits of optimizing high workload SQL statements. Furthermore, the SQL resource spectrum analysis is integrated into our Tosska DB Ace for Oracle software.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Inspect SQL – YouTube

How To Use 80/20 Rule To Tune A Database Application I?

In 2002, Microsoft reported that the majority of errors and crashes in their software were caused by a minority of the bugs they detected, specifically about 20%. This phenomenon, known as the Pareto principle or the eighty-twenty rule, is also relevant to the analysis of database SQL behavior. It is assumed that only 20% of SQL statements should account for 80% of the resource consumption of the entire database system. Any application system in which less than 20% of SQL statements consume more than 80% of database resources is considered abnormal and should be reviewed and optimized.

Using a SQL resource spectrum analysis technique, users have the ability to set a resource threshold percentage, typically around 80%, to pinpoint the top M% of SQL statements that are responsible for the majority of resource consumption. If the analysis reveals that only a few SQL statements, such as only 4%, are responsible for over 80% of the total resource consumption, it suggests that optimizing these specific SQL statements has the potential to significantly enhance the performance of the entire database.

The presented example displays a collection of 90 SQL statements obtained from Oracle SGA, with each statement listed in a chart based on its resource usage, arranged in descending order starting from the highest consumption SQL on the left-hand side. The analysis reveals that approximately 14.44% of the SQL statements account for 80% of the total elapsed time, while 21.11% of the SQL statements account for 80% of the total CPU time. This indicates that the distribution of the SQL workload aligns closely with the 80/20 rule. Therefore, there may not be a pressing need for SQL tuning as it is unlikely to result in significant performance improvements.

Another example highlights a significant deviation from the 80/20 rule in the distribution of the SQL workload. The analysis shows that around 4 SQL statements are accountable for 80% of the total elapsed time, while another 4 SQL statements are responsible for 80% of the total CPU time, and 2 SQL statements contribute to 80% of the total disk reads. This indicates that by optimizing just these 4 SQL statements, it is possible to achieve a substantial improvement in the overall performance of the database.

You can utilize Excel to conduct a simulation of the 80/20 rule analysis described above, providing a comprehensive overview of the distribution of the SQL workload. This approach facilitates a rapid evaluation of the overall health of the database’s SQL performance, as well as the associated costs and benefits of optimizing high workload SQL statements. Furthermore, the SQL resource spectrum analysis is integrated into our Tosska DB Ace for Oracle software.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Inspect SQL – YouTube

How to Tune SQL with DB Link for Oracle II?

Here is an example SQL; the query retrieves employee, department, and grade data from the remote database @richdb.

SELECT   *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
ORDER BY emp_id

The following is the query plan of this SQL; it takes 15.92 seconds to finish. The first step of the query plan is ‘SELECT STATEMENT REMOTE’, which means the entire query will be executed on the remote database @richdb and the result will be sent back to the local database. The query plan is a little complicated, and it is not easy to tell whether it is optimal. But one thing we can try is to have the query partially executed in the local database @local.

In order to request that Oracle perform certain join operations in the local database, the SQL query must include at least one table that is resolved in the local database. This allows the use of the hint /*+ DRIVING_SITE ( [ @ queryblock ] tablespec ) */ in the SQL query. If no tables are resolved in the local database, there is no way to ask Oracle to perform join operations in the local database.

Let’s add a dummy condition “EXISTS (SELECT ‘X’ FROM DUAL)” and a hint /*+ DRIVING_SITE(DUAL) */ to the SQL to force Oracle to execute some join operations in the local database.

SELECT   /*+ DRIVING_SITE(DUAL) */ *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
         AND EXISTS ( SELECT 'x'
                      FROM   dual)
ORDER BY emp_id

Below is the query plan for the modified SQL, which takes 4.08 seconds, approximately 4 times faster than the original SQL statement; now only one join operation is performed in the remote database.

Adding an ORDERED hint to the SQL query can result in further optimization. It breaks down the compound statement highlighted in the previous query plan into individual remote extractions of each table’s data, as shown in the following query plan.

SELECT   /*+ DRIVING_SITE(DUAL) ORDERED */ *
FROM     emp_subsidiary@richdb a,
         department@richdb,
         grade@richdb
WHERE    emp_grade < 1200
         AND emp_dept = dpt_id
         AND emp_grade = grd_id
         AND EXISTS ( SELECT 'x'
                      FROM   dual)
ORDER BY emp_id

If you are familiar with Oracle Exadata, you may notice that the data retrieval process for REMOTE tables in remote database @richdb works similarly to that of the Exadata Storage Server.

It is important to remember that applying this technique to SQL queries with a DB Link is only beneficial in certain environments. For instance, it is ideal when the network speed is good, data traffic is not heavy, and the workload on the local database is low.

Tosska DB Ace for Oracle can automatically perform this type of rewrite, resulting in an SQL query that runs almost 10 times faster than the original.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Tune DB Link SQL – YouTube

How to Tune SQL with DB Link for Oracle I?

Here is an example SQL query used to calculate the average salary of employees in the remote database @richdb for each department in the local database whose department name starts with the letter “D”.

SELECT   Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The following is the query plan of this SQL; it takes 9.16 seconds to finish. The plan shows a Nested Loops join from DEPARTMENT in the local database to EMPLOYEE in the remote database. Because the EMPLOYEE table is much larger than the DEPARTMENT table, the nested loop join path is not optimal in this case.

To ask Oracle to consider doing the join operation in the remote database @richdb, I added a hint /*+ DRIVING_SITE(employee) */ to tell Oracle to use the EMPLOYEE table’s database @richdb as the driving site for the distributed query.

SELECT   /*+ DRIVING_SITE(employee) */ Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The following query plan shows that the driving site has changed to @richdb, which now remotely retrieves DEPARTMENT data from the “local” database. The speed is improved to 5.94 seconds. However, the query plan is a little complicated: there is a view constructed by a Hash Join of two index fast full scans of indexes on EMPLOYEE and DEPARTMENT.

I further changed the SQL and added a dummy operation Coalesce(dpt_id, dpt_id) to the select list of the subquery to block the index fast full scan of the DEPARTMENT table.

SELECT   /*+ DRIVING_SITE(employee) */ Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT Coalesce(dpt_id,dpt_id)
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The change gives the SQL a new query plan, shown below; the performance is significantly improved to 0.71 seconds. From this example you can see how the dummy operation Coalesce(dpt_id, dpt_id) affects the Oracle SQL optimizer’s decision.

This kind of rewrite can be achieved automatically by Tosska DB Ace for Oracle; the rewritten SQL is almost 13 times faster than the original.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How to Tune SQL with IN Subquery with Intersect for Oracle?

Here is an example SQL that retrieves data from the EMPLOYEE and DEPARTMENT tables for employees whose grade code appears in the GRADE table.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM grade
                         WHERE grd_min_salary < 200000)
       AND emp_dept < 'D'

The following is the query plan of this SQL; it takes 8.3 seconds to finish. The plan shows a Hash Join of GRADE and EMPLOYEE and then a hash join to DEPARTMENT. It looks like Oracle gave up any Nested Loops operations after seeing the actual number of rows returned from the GRADE table in this adaptive plan.

In order to ask Oracle to consider Nested Loops operations, I added an extra INTERSECT operation to the subquery to rapidly narrow down the set of grd_id values returned from the GRADE table first.

SELECT emp_id,
       emp_name,
       dpt_name
FROM   employee,
       department
WHERE  emp_dept = dpt_id
       AND emp_grade IN (SELECT grd_id
                         FROM   grade
                         WHERE  grd_min_salary < 200000
                         INTERSECT
                         SELECT e1.emp_grade
                         FROM   employee e1
                         WHERE  emp_dept < 'D')
       AND emp_dept < 'D'

The rewritten SQL generates a query plan that is entirely different from the original. The new plan uses Nested Loops from DEPARTMENT to EMPLOYEE as the first steps and then a Hash Join to the GRADE table. The new plan now takes only 0.81 seconds.


This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle; the rewritten SQL is more than 10 times faster than the original.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to Tune SQL Statement with CASE Expression by Hints Injection for Oracle?

The following is a simple SQL statement with a CASE expression in its WHERE clause.

SELECT *
FROM   employee
WHERE
      CASE
      WHEN emp_salary < 1000
      THEN  'low'
      WHEN emp_salary > 100000
      THEN  'high'
      ELSE  'Normal'
      END = 'low'

The following is the query plan of this SQL; it takes 4.64 seconds to finish. The plan shows a Full Table Scan of the EMPLOYEE table because the CASE expression cannot utilize the emp_salary index: the CASE expression disables an index range scan on emp_salary.

Commonly, we would try to enable an index access path by forcing the SQL with an INDEX hint as follows:

SELECT /*+ INDEX(@SEL$1 EMPLOYEE) */ *
FROM   employee
WHERE CASE
      WHEN emp_salary < 1000
      THEN  'low'
      WHEN emp_salary > 100000
      THEN  'high'
      ELSE  'Normal'
      END = 'low'

Although the CASE expression still prevents an index range scan on emp_salary, an index full scan is now used, which filters the result more quickly than the original full table scan of the EMPLOYEE table.

This hint injection takes 0.38 seconds, which is 12 times faster than the original SQL with the full table scan. For SQL statements whose source code you cannot change, you can deploy the hint together with the SQL text as a SQL Patch in the database, without modifying the application.
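A rough sketch of such a patch is shown below; it assumes Oracle 12.2 or later, where DBMS_SQLDIAG.CREATE_SQL_PATCH is a documented function. The patch name is illustrative, and the SQL text must match the statement issued by the application exactly.

DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_text  => q'[SELECT * FROM employee WHERE CASE WHEN emp_salary < 1000 THEN 'low' WHEN emp_salary > 100000 THEN 'high' ELSE 'Normal' END = 'low']',
    hint_text => 'INDEX(@SEL$1 EMPLOYEE)',
    name      => 'employee_case_patch');   -- illustrative patch name
END;
/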

If you can modify your source code, the best performance comes from rewriting the CASE expression into the following syntax with multiple OR conditions.

SELECT *
FROM   employee
WHERE emp_salary < 1000
     AND 'low' = 'low'
     OR NOT  ( emp_salary < 1000 )
        AND  emp_salary > 100000
        AND  'high' = 'low'
     OR NOT  ( emp_salary < 1000
           OR emp_salary > 100000 )
        AND  'Normal' = 'low'

The new query plan shows an INDEX RANGE SCAN of the emp_salary index.

This kind of rewrite and hint injection can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to use ROWID to improve an UPDATE statement for Oracle?

The following is an UPDATE SQL with a subquery; it updates the EMPLOYEE table for rows whose emp_dept matches the records returned from the subquery.

update  employee
   set  emp_name = 'testing'
 where  emp_dept IN (select dpt_id
            from department
          where dpt_name like 'A%')
and emp_grade>2000

You can see Oracle uses a Hash Join of the DEPARTMENT and EMPLOYEE tables to execute the update. This query plan takes 1.96 seconds to complete, and no index is used even though emp_dept, dpt_id, and emp_grade are indexed columns. It looks like the most expensive operation is the Table Access Full scan of the EMPLOYEE table.

Let’s rewrite the SQL into the following syntax to eliminate the Table Access Full operation on EMPLOYEE from the query plan. The new subquery over EMPLOYEE extracts the records whose emp_dept matches a DEPARTMENT row with dpt_name like ‘A%’. The ROWID returned from the EMPLOYEE subquery ensures more efficient access by ROWID to the outer EMPLOYEE table.

UPDATE  employee
SET   emp_name = 'testing'
WHERE   ROWID IN (SELECT  ROWID
          FROM   employee
          WHERE  emp_dept IN (SELECT  dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'A%'))
     AND emp_grade > 2000

You can see that the final query plan with this syntax has a lower cost and no full table access to the EMPLOYEE table. The new syntax takes 0.9 seconds and is more than 2 times faster than the original.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert Pro for Oracle. There is another SQL rewrite with similar performance, but it is not suitable for this short article; I may discuss it later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to index SQL with an aggregate function for Oracle?

The following example SQL selects the maximum emp_address, which is not an indexed column, from the EMPLOYEE table with 3 million records; emp_grade is an indexed column.

select max(emp_address) from employee a
where emp_grade<4000

Since 80% of the EMPLOYEE table’s records must be retrieved to examine the maximum emp_address string, the Table Access Full on the EMPLOYEE table shown in the query plan of this SQL is reasonable.

How many ways to build an index to improve this SQL?
Although it is a simple SQL, there are still three ways to build an index to improve it. The following are the possible indexes that can be built: the first is a single-column index, and the second and third are composite indexes with different column orders.
1. EMP_ADDRESS
2. EMP_GRADE, EMP_ADDRESS
3. EMP_ADDRESS, EMP_GRADE

Most people may pick the single-column EMP_ADDRESS index as the first choice. Let’s see what the query plan looks like if we build a virtual index for the EMP_ADDRESS column. The estimated cost is reduced by almost half, but this query plan is ultimately not used once the physical index is built for benchmarking and actual statistics are collected.
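One possible way to experiment with this (the NOSEGMENT option and the "_use_nosegment_indexes" parameter are widely known but not officially documented, and the index name is illustrative):

-- Create a virtual (segment-less) index so the optimizer can cost it
-- without building the physical index.
CREATE INDEX employee_addr_vix ON employee (emp_address) NOSEGMENT;

-- Let the optimizer consider virtual indexes in this session only.
ALTER SESSION SET "_use_nosegment_indexes" = TRUE;

EXPLAIN PLAN FOR
SELECT MAX(emp_address) FROM employee a WHERE emp_grade < 4000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);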

The following query plan shows that the EMP_ADDRESS index is not used; the plan is the same as for the original SQL without any new index.

Let’s try the second candidate, the composite index (EMP_GRADE, EMP_ADDRESS). The new query plan shows an Index Fast Full Scan of this index, a reasonable plan in which no table data needs to be retrieved, so the execution time is reduced from 16.83 seconds to 3.89 seconds.

Let’s test the last candidate, the composite index (EMP_ADDRESS, EMP_GRADE), in which EMP_ADDRESS is the leading column. It creates a new query plan with an extra FIRST ROW operation on top of the INDEX FULL SCAN (MIN/MAX), which dramatically reduces the execution time from 16.83 seconds to 0.08 seconds.
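For reference, the winning candidate could be created as follows (the index name is illustrative):

CREATE INDEX employee_addr_grade_ix ON employee (emp_address, emp_grade);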

So, indexing is sometimes an art that deserves close attention; some potential solutions may perform beyond your expectations.

The best index solution is now more than 200 times faster than the original SQL without the index. This kind of index recommendation can be achieved automatically by Tosska SQL Tuning Expert for Oracle.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/