The Overlooked Significance of Performance Deployment in Software Deployment

Performance deployment is a critical aspect of software deployment that is often undervalued. While it is important to ensure that software is deployed correctly, it is equally important to optimize its performance in the production environment. By recognizing the significance of performance deployment during the deployment process, developers can ensure that their software performs well, meets the needs of its users, and avoids the post-release performance issues that erode user satisfaction.

The Missing Link of Performance Deployment Between Testing Database and Production Database
Despite extensive pre-deployment testing, there is still a chance of encountering performance problems once the software is deployed to the production environment. The following issues may arise:

  1. Inability to copy production data to the testing database.
  2. Significant differences in hardware and software configuration between the testing and production databases.
  3. Inability to test software in the production database due to security restrictions.
  4. DML SQL statements in the new software that could damage the data integrity of the production database.

It is not unusual for users to face performance issues or encounter application errors following a release of new application code.

Ensuring Performance Deployment with a Pre-Deployment Process
The following presents a novel approach for ensuring performance reliability while deploying software. The idea is simple: since it is not feasible to run the new application code against the production database, why not obtain the query plan of every SQL statement from the production database? This way, we can assess how each SQL statement in the application code is likely to perform once it is deployed to production.

Suppose there are 10 SQL statements in the new application code that we need to identify. To isolate these 10 statements, we first clear the shared pool of the testing database and then execute the new application there. We can then capture and analyze the 10 SQL statements and obtain their query plans from the production database (a sketch of this capture step follows the table). The table below presents the possible outcomes of the query plan comparison.

Observation – Possible reasons
Explain Plan error in the production database – The SQL statement requires access to objects that do not exist in the production database.
Query plan changes – Significant statistical differences between the testing and production databases, including differences in the database schema such as missing or new partitions and other structural changes. Benchmarking the SQL may be necessary because performance may change significantly.
Unused indexes – Some indexes used in the testing database are not used in the production database. Benchmarking the SQL may be necessary because performance may change significantly.
New used indexes – Some indexes used in the production database are not used in the testing database. Benchmarking the SQL may be necessary because performance may change significantly.
Total cost changes – Changes in the overall query plan cost of the 10 SQL statements. If the production database holds a larger data volume than the testing database, the cost increase will be larger.
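
As a rough illustration of the capture step described above, here is a minimal sketch, assuming the new application tags its sessions with the module name NEW_APP (the module name and the sample statement are illustrative only):

-- 1. On the testing database: clear the shared pool, then run the new application.
ALTER SYSTEM FLUSH SHARED_POOL;

-- 2. Capture the statements the new application issued.
SELECT sql_id, sql_fulltext
FROM   v$sql
WHERE  module = 'NEW_APP';

-- 3. On the production database: generate the query plan of each captured
--    statement without executing it, for example:
EXPLAIN PLAN FOR
SELECT Avg(emp_salary), emp_dept FROM employee GROUP BY emp_dept;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);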

DBAO SQL Performance Tracker – YouTube
Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How to analyze the performance impact of a database environment change using what-if analysis?

To properly evaluate the impact of a database environment change on a set of SQL statements, it is crucial to understand how SQL query performance can be affected. There are two primary types of performance change. The first type, which I refer to as “progressive change,” is typically caused by changes in statistics, such as fluctuations in the data volume of the relevant tables or index pages. If the change in statistics is not significant enough to trigger a new query plan, the query plan remains the same and the performance of the SQL query does not change significantly.

The second type of performance change, which I refer to as “quantum change,” occurs when a new query plan is introduced due to a significant change in statistics or schema. This type of change can have a major impact on performance, occasionally resulting in performance disasters.
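
One quick way to spot a potential “quantum change” is to check whether a statement’s plan hash value differs across AWR snapshots. A minimal sketch, assuming the before/after snapshot IDs are supplied as bind variables:

SELECT   sql_id,
         COUNT(DISTINCT plan_hash_value) AS plan_count
FROM     dba_hist_sqlstat
WHERE    snap_id BETWEEN :before_snap AND :after_snap
GROUP BY sql_id
HAVING   COUNT(DISTINCT plan_hash_value) > 1;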

When making changes to the database environment, it is essential to closely monitor the performance of SQL queries and take appropriate measures to optimize the affected statements. To track critical SQL statements before and after an environment change, there are some general steps you can follow:

  1. Extract the SQL statements along with their query plans and performance statistics from the SGA or AWR (a sketch follows this list).
  2. Apply the environment changes to the database, such as creating new indexes, gathering statistics, upgrading the database, or forecasting the performance of a software deployment in the production database.
  3. Obtain the query plan from the changed database environment.
  4. Compare the query plan for each SQL statement to identify any differences.
  5. Look for potential issues such as unused indexes, high-cost SQL changes, etc.
  6. Benchmark the query plan for the modified SQL statements to detect any degraded performance.
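
For step 1, a minimal sketch of pulling the candidate statements, their plans, and basic statistics out of the SGA (the schema name is an assumption for illustration):

SELECT sql_id,
       plan_hash_value,
       executions,
       elapsed_time,
       buffer_gets,
       sql_fulltext
FROM   v$sql
WHERE  parsing_schema_name = 'APP_SCHEMA'
ORDER  BY elapsed_time DESC;

-- The plan of a captured statement can then be displayed with, for example:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));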

If you only have a small number of SQL statements to track in terms of performance before and after environment changes, the above steps can be done manually. However, it can be challenging if you have hundreds of SQL statements to monitor without a tool. Tosska DB Ace for Oracle is equipped with a robust tool that can assist you in tracking the performance differences of SQL statements between two databases.

DBAO SQL Performance Tracker – YouTube
Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How To Use 80/20 Rule To Tune A Database Application II?

The previous article “How To Use 80/20 Rule To Tune A Database Application I” demonstrated how the 80/20 rule can be applied to evaluate the overall performance of the SQL workload in a database. In this example, a set of 90 SQL statements retrieved from the Oracle SGA is presented in a chart that lists each statement by its resource usage in descending order, with the most resource-intensive SQL on the left. The analysis reveals that roughly 14.44% of the SQL statements consume 80% of the total elapsed time, while 21.11% of the SQL statements consume 80% of the total CPU time, indicating that the SQL workload distribution aligns well with the 80/20 rule. Therefore, tuning the workload as a whole may not be necessary, since it is unlikely to result in significant performance improvements.

However, to further optimize the database performance cost-effectively, it is recommended to conduct an in-depth investigation of the top 20% of high workload SQL statements. This will reveal that the resource utilization drops steeply in the first few SQL statements, making them the most critical candidates for optimization.

Let’s aim to reduce the proportion of the total resource consumption from 80% to 60% and examine the SQL statements responsible for that share of the resources. The results are interesting: 3 SQL statements account for 60% of the elapsed time, 6 SQL statements account for 60% of the CPU time, and only one SQL statement accounts for 60% of the disk reads. By focusing on these SQL statements, it is possible to address up to 60% of the database workload. For instance, if the database is experiencing an I/O bottleneck, concentrating on that one SQL statement could yield savings of up to 60% of the disk reads.

You can utilize Excel to conduct a simulation of the 80/20 rule analysis described above, providing a comprehensive overview of the distribution of the SQL workload. This approach facilitates a rapid evaluation of the overall health of the database’s SQL performance, as well as the associated costs and benefits of optimizing high workload SQL statements. Furthermore, the SQL resource spectrum analysis is integrated into our Tosska DB Ace for Oracle software.
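
For reference, the same kind of resource-spectrum calculation can be sketched directly in SQL with analytic functions over V$SQL (elapsed time is shown here; the same pattern applies to CPU time or disk reads):

SELECT sql_id,
       elapsed_time,
       ROUND(100 * RATIO_TO_REPORT(elapsed_time) OVER (), 2)   AS pct_of_total,
       ROUND(100 * SUM(elapsed_time)
                     OVER (ORDER BY elapsed_time DESC
                           ROWS UNBOUNDED PRECEDING)
                 / SUM(elapsed_time) OVER (), 2)                AS cumulative_pct
FROM   v$sql
ORDER  BY elapsed_time DESC;

-- The rows where cumulative_pct first reaches 80% (or 60%) are the tuning candidates.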

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

DBAO Inspect SQL – YouTube

How to Tune SQL with DB Link for Oracle I?

Here is an example SQL query that calculates, for each department in the local database whose name starts with the letter “D”, the average salary of the employees in the remote database @richdb.

SELECT   Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The following is the query plan of this SQL; it takes 9.16 seconds to finish. The query plan shows a Nested Loops join from DEPARTMENT in the local database to EMPLOYEE in the remote database. Because the EMPLOYEE table is much larger than the DEPARTMENT table, the nested loop join path is not optimal in this case.

To ask Oracle to consider performing the join operation in the remote database @richdb, I added the hint /*+ DRIVING_SITE(employee) */ to tell Oracle to use the EMPLOYEE table’s database @richdb as the driving site for the distributed query.

SELECT   /*+ DRIVING_SITE(employee) */ Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The following query plan shows that the driving site has changed to @richdb, which now retrieves the DEPARTMENT data remotely from the “local” database, and the speed is improved to 5.94 seconds. However, the query plan looks a little more complicated: there is a view constructed by a Hash Join of two “index fast full scan” operations on indexes of EMPLOYEE and DEPARTMENT.

I further changed the SQL and added a dummy operation Coalesce(dpt_id,dpt_id) to the select list of the subquery to block the index fast full scan of the DEPARTMENT table.

SELECT   /*+ DRIVING_SITE(employee) */ Avg(emp_salary),
         emp_dept
FROM     employee@richdb
WHERE    emp_dept IN (SELECT Coalesce(dpt_id,dpt_id)
                      FROM   department
                      WHERE  dpt_name LIKE 'D%')
GROUP BY emp_dept

The change gives the SQL the new query plan shown below, and the performance improves significantly to 0.71 seconds. This example shows how the dummy operation Coalesce(dpt_id,dpt_id) affects the Oracle SQL optimizer’s decision.

This kind of rewrite can be performed by Tosska DB Ace for Oracle automatically; it shows that the rewrite is almost 13 times faster than the original SQL.

Tosska DB Ace Enterprise for Oracle – Tosska Technologies Limited

How to Tune SQL Statement with CASE Expression by Hints Injection for Oracle?

The following is a simple SQL statement with a CASE expression.

SELECT *
FROM   employee
WHERE
      CASE
      WHEN emp_salary < 1000
      THEN  'low'
      WHEN emp_salary > 100000
      THEN  'high'
      ELSE  'Normal'
      END = 'low'

The following is the query plan of this SQL; it takes 4.64 seconds to finish. The plan shows a Full Table Scan of the EMPLOYEE table because the CASE expression cannot utilize the emp_salary index: the CASE expression disables the index range search on emp_salary.

Commonly, we would try to enable an index search by forcing the SQL with an INDEX hint as follows:

SELECT /*+ INDEX(@SEL$1 EMPLOYEE) */ *
FROM   employee
WHERE CASE
      WHEN emp_salary < 1000
      THEN  'low'
      WHEN emp_salary > 100000
      THEN  'high'
      ELSE  'Normal'
      END = 'low'

Although the CASE expression still prevents an index range search on the emp_salary index, an index full scan is now used, which filters the result more quickly than the original full table scan of the EMPLOYEE table.

This hint injection takes 0.38 seconds, which is 12 times faster than the original SQL with the full table scan. For this kind of SQL statement where you cannot change the source code, you can deploy the hints together with the SQL text to the database as a SQL Patch, without any change to the application.
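
For illustration, a minimal sketch of such a SQL Patch on Oracle 12.2 or later, where DBMS_SQLDIAG.CREATE_SQL_PATCH is available as a public function (the patch name is illustrative, and the sql_text must match the statement issued by the application exactly):

DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.create_sql_patch(
    sql_text  => q'[SELECT * FROM employee WHERE CASE WHEN emp_salary < 1000 THEN 'low' WHEN emp_salary > 100000 THEN 'high' ELSE 'Normal' END = 'low']',
    hint_text => 'INDEX(@SEL$1 EMPLOYEE)',
    name      => 'employee_case_low_patch');
END;
/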

If you can modify the source code, the best performance comes from rewriting the CASE expression into the following syntax with multiple OR conditions.

SELECT *
FROM   employee
WHERE emp_salary < 1000
     AND 'low' = 'low'
     OR NOT  ( emp_salary < 1000 )
        AND  emp_salary > 100000
        AND  'high' = 'low'
     OR NOT  ( emp_salary < 1000
           OR emp_salary > 100000 )
        AND  'Normal' = 'low'

The new query plan shows an INDEX RANGE SCAN of the emp_salary index.

This kind of rewrite and hint injection can be performed by Tosska SQL Tuning Expert Pro for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to use ROWID to improve an UPDATE statement for Oracle?

The following is an UPDATE SQL statement with a subquery; it updates the EMPLOYEE table when emp_dept matches the records returned by the subquery.

update  employee
   set  emp_name = 'testing'
 where  emp_dept IN (select dpt_id
            from department
          where dpt_name like 'A%')
and emp_grade > 2000

You can see that Oracle uses a Hash Join of the DEPARTMENT table and the EMPLOYEE table to execute the update. This query plan takes 1.96 seconds to complete, and no index is used even though emp_dept, dpt_id, and emp_grade are indexed columns. The most expensive operation is the Table Access Full scan of the EMPLOYEE table.

Let’s rewrite the SQL into the following syntax to eliminate EMPLOYEE’s Table Access Full operation from the query plan. The new inner subquery forces EMPLOYEE records to be extracted through the emp_dept values found in the DEPARTMENT table where dpt_name is like 'A%'. The ROWID returned from the EMPLOYEE subquery ensures a more efficient ROWID access to the outer EMPLOYEE table.

UPDATE  employee
SET   emp_name = 'testing'
WHERE   ROWID IN (SELECT  ROWID
          FROM   employee
          WHERE  emp_dept IN (SELECT  dpt_id
                      FROM   department
                      WHERE  dpt_name LIKE 'A%'))
     AND emp_grade > 2000

You can see that the final query plan for this syntax has a lower cost and no full table access to the EMPLOYEE table. The new syntax takes 0.9 seconds, more than 2 times faster than the original syntax.

This kind of rewrite can be performed by Tosska SQL Tuning Expert Pro for Oracle automatically. There is another SQL rewrite with similar performance, but it is not suitable for this short article; maybe I will discuss it later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to index SQL with an aggregate function for Oracle?

The following example SQL selects the maximum emp_address, a column that is not indexed, from an EMPLOYEE table with 3 million records; emp_grade is an indexed column.

select max(emp_address) from employee a
where emp_grade<4000

Since 80% of the EMPLOYEE table’s records must be retrieved to examine the maximum emp_address string, the Table Access Full of the EMPLOYEE table shown in the query plan is reasonable.

How many ways to build an index to improve this SQL?
Although it is a simple SQL statement, there are still three ways to build an index to improve it. The following are the possible indexes: the first is a single-column index, and the second and third are composite indexes with different column orders.
1. EMP_ADDRESS
2. EMP_GRADE, EMP_ADDRESS
3. EMP_ADDRESS, EMP_GRADE
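
For reference, these three candidates could be created roughly as follows (the index names are illustrative):

CREATE INDEX emp_address_ix   ON employee (emp_address);
CREATE INDEX emp_grd_addr_ix  ON employee (emp_grade, emp_address);
CREATE INDEX emp_addr_grd_ix  ON employee (emp_address, emp_grade);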

Most people would pick the single-column EMP_ADDRESS index as the first choice to improve this SQL. Let’s see what the query plan looks like if we build a virtual index for the EMP_ADDRESS column. The estimated cost is reduced by almost half, but this query plan is ultimately not used once the physical index is built for benchmarking and actual statistics are collected.
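
A virtual index can be sketched roughly as follows, assuming the session enables the hidden parameter _use_nosegment_indexes so the optimizer will consider NOSEGMENT indexes (the index name is illustrative):

CREATE INDEX emp_address_vix ON employee (emp_address) NOSEGMENT;

ALTER SESSION SET "_use_nosegment_indexes" = TRUE;

EXPLAIN PLAN FOR
SELECT MAX(emp_address) FROM employee WHERE emp_grade < 4000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);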

The following query plan shows that the EMP_ADDRESS index is not used and the plan is the same as that of the original SQL without any new index.

Let’s try the second candidate, the composite index (EMP_GRADE, EMP_ADDRESS). The new query plan shows an Index Fast Full Scan of this index, a reasonable plan since no table data needs to be retrieved, and the execution time is reduced from 16.83 seconds to 3.89 seconds.

Let’s test the last candidate, the composite index (EMP_ADDRESS, EMP_GRADE), in which EMP_ADDRESS is placed as the first column. It produces a new query plan with an extra FIRST ROW operation on top of an INDEX FULL SCAN (MIN/MAX), which dramatically reduces the execution time from 16.83 seconds to 0.08 seconds.

So, indexing is sometimes an art that deserves closer attention; some potential solutions may perform far beyond your expectations.

The best index solution is now more than 200 times faster than the original SQL without an index. This kind of index recommendation can be produced by Tosska SQL Tuning Expert for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How is the order of the columns in a composite index affecting a subquery performance for Oracle?

We know that the order of the columns in a composite index determines whether the index will be used against a table: as a rule of thumb, a query will use a composite index only if the WHERE clause of the query contains at least the leading (left-most) columns of the index. But things are far more complicated in correlated subquery situations. Let’s use the following example SQL to elaborate on the details.

SELECT D.*
FROM   department D
WHERE EXISTS (SELECT    Count(*)
         FROM     employee E
         WHERE     E.emp_id < 1050000
                AND E.emp_dept = D.dpt_id
         GROUP BY  E.emp_dept
         HAVING    Count(*) > 124)

The following is the query plan of the SQL; it takes 10 seconds to finish. We can see that the SQL can utilize the E.emp_id and E.emp_dept indexes individually.

Let’s see whether a new composite index can help improve the SQL’s performance. As a rule of thumb, the higher-selectivity column E.emp_id is set as the first column of the composite index (E.emp_id, E.emp_dept).

The following is the query plan with the new composite index (E.emp_id, E.emp_dept), and the resulting performance is not good: it takes 11.8 seconds, even worse than the original query plan.

If we change the order of the columns in the composite index to (E.emp_dept, E.emp_id), the following query plan is generated and the speed is improved to 0.31 seconds.
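
For reference, the two composite indexes compared above could be created roughly as follows (the index names are illustrative; the timings are those observed above):

CREATE INDEX emp_id_dept_ix  ON employee (emp_id, emp_dept);   -- 11.8 seconds in this test
CREATE INDEX emp_dept_id_ix  ON employee (emp_dept, emp_id);   -- 0.31 seconds in this test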

The above two query plans are similar; the only difference is operation 2. The plan for the composite index with E.emp_id as the leading column uses an INDEX RANGE SCAN of the new index, whereas the second plan uses an INDEX SKIP SCAN of the composite index with E.emp_dept as the leading column. You can see an extra filter operation for E.emp_dept in the Predicate Information of the INDEX RANGE SCAN of the index (E.emp_id, E.emp_dept), while the (E.emp_dept, E.emp_id) composite index uses an INDEX SKIP SCAN without an extra operation to filter E.emp_dept again.

So, you have to test the column order of composite indexes very carefully for correlated subqueries; sometimes it will give you improvements that exceed your expectations.

This kind of index recommendation can be achieved by Tosska SQL Tuning Expert for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to use ORDERED Hint to Tune a SQL with subquery for Oracle?

The following is the description of the ORDERED hint.

The ORDERED hint causes Oracle to join tables in the order in which they appear in the FROM clause.

If you omit the ORDERED hint from a SQL statement performing a join, then the optimizer chooses the order in which to join the tables. You might want to use the ORDERED hint to specify a join order if you know something about the number of rows selected from each table that the optimizer does not. Such information lets you choose an inner and outer table better than the optimizer could.

We usually use an ORDERED hint to control the join order, but how does this hint behave in a SQL statement with a subquery? Let’s use the following SQL as an example to see how the ORDERED hint works for a subquery.

SELECT *
     FROM DEPARTMENT
where  dpt_id
     in (select emp_dept from employee
      where emp_id >3300000)

The following is the query plan of the SQL; it takes 68.84 seconds to finish. The plan shows a “TABLE ACCESS FULL” of the DEPARTMENT table and a “NESTED LOOPS SEMI” join to an “INDEX RANGE SCAN” of EMPLOYEE.

If you think this is not an effective plan, you may want to reorder the join path and see whether an ORDERED hint works in a subquery case like this:

SELECT  /*+ ORDERED */ *
FROM  department
WHERE  dpt_id IN (SELECT  emp_dept
         FROM  employee
         WHERE  emp_id > 3300000)

Here is the query plan of the hinted SQL; the speed is 3.44 seconds, which is 20 times better than the original SQL. The new query plan shows the new join order: EMPLOYEE is retrieved first and then hash joined to DEPARTMENT. You can see that the ORDERED hint places the subquery’s table first. This new order causes a different data retrieval method to be used on the EMPLOYEE table, which makes the overall performance much better than the original query plan.

This kind of hint injection can be performed by Tosska SQL Tuning Expert for Oracle automatically. There are other hint-injected versions of the SQL with better performance, but they are not suitable for this short article; maybe I will discuss them later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to tune a SQL that cannot be tuned?

Some mission-critical SQL statements have already reached their maximum speed within the current index configuration. This means those SQL statements cannot be improved by syntax rewrite or hint injection. Most people may think that the only way to improve this kind of SQL is to upgrade the hardware. For example, the following SQL statement has every column in the WHERE clause indexed, and Oracle already generates the best query plan; no syntax rewrite or hint injection can help Oracle improve the SQL’s performance.

SELECT EMP_ID,
    EMP_NAME,
    SAL_EFFECT_DATE,
    SAL_SALARY
  FROM EMPLOYEE,
    EMP_SAL_HIST,
    DEPARTMENT,
    GRADE
WHERE EMP_ID = SAL_EMP_ID
  AND SAL_SALARY < 200000
  AND EMP_DEPT = DPT_ID
  AND EMP_GRADE = GRD_ID
  AND GRD_ID < 1200
  AND EMP_DEPT < 'D'

The following is the query plan and execution statistics of the SQL; it takes 2.33 seconds to extract all 502 records. That is not acceptable for a mission-critical SQL executed thousands of times per hour. Do we have another choice if we don’t want to buy extra hardware to improve this SQL?

Introduce new plans for Oracle’s SQL optimizer to consider
Although all columns in the WHERE clause are indexed, can we build some compound indexes to help Oracle’s SQL optimizer generate new query plans that may perform better than the original plan? Let’s adopt the common practice and use the EMPLOYEE columns from the WHERE clause below to compose a concatenated index (EMP_ID, EMP_DEPT, EMP_GRADE).

WHERE  EMP_ID = SAL_EMP_ID
  AND  SAL_SALARY < 200000
  AND  EMP_DEPT = DPT_ID
  AND  EMP_GRADE = GRD_ID
  AND  GRD_ID < 1200
  AND  EMP_DEPT < 'D'

CREATE INDEX C##TOSSKA.TOSSKA_09145226686_V0043 ON C##TOSSKA.EMPLOYEE
(
 EMP_ID,
 EMP_DEPT,
 EMP_GRADE
)

The following is the query plan after the concatenated index is created. Unfortunately, the speed of the SQL is 2.40 seconds although a new query plan is introduced by Oracle’s SQL optimizer.

To be honest, it is difficult to improve this SQL if we rely only on common practice or human knowledge to build indexes. Imagine we had an AI engine that could try the most effective compound indexes and explore the potential solutions of Oracle’s SQL optimizer for this SQL. The following concatenated indexes are the recommendation of such an AI engine.

CREATE INDEX C##TOSSKA.TOSSKA_13124445731_V0012 ON C##TOSSKA.EMP_SAL_HIST
(
 SAL_SALARY,
 SAL_EFFECT_DATE,
 SAL_EMP_ID
)
CREATE INDEX C##TOSSKA.TOSSKA_13124445784_V0044 ON C##TOSSKA.EMPLOYEE
(
 EMP_GRADE,
 EMP_DEPT,
 EMP_ID,
 EMP_NAME
)

The following is the query plan after these two concatenated indexes are created and the speed of the SQL is improved to 0.13 seconds. It is almost 18 times better than that of the original SQL without the new indexes.

The above indexes include some columns that appear in the SELECT list of the SQL, and there is a correlated relationship between the indexes that Oracle’s SQL optimizer uses to generate the query plan. This means that omitting any column of the recommended indexes, or reshuffling the column positions of the concatenated indexes, may fail to produce this query plan structure. So, it is difficult for a human expert to compose these two concatenated indexes manually. I am glad to tell you that this kind of AI engine is actually available in the following product.

https://www.tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/