A Quick Guide to Stored Procedures in Oracle Database and SQL


Stored procedures are increasing in popularity in Oracle Database and SQL Server because they execute faster than external code. In the past, application code mostly resided in external programs; as it shifts inside the database engine, database professionals must keep its memory requirements in mind.

This includes planning for the time when most database-access code resides within the database itself. DBAs also need to know how to manage these stored procedures to maintain ideal database performance. We will look at some of these methods, along with the advantages of using stored procedures and triggers in an Oracle database.

Perks of Stored Procedures for Oracle Database Performance Tuning

Until recently, most Oracle databases kept only limited code within their stored procedures. The trend has shifted because of the various advantages that come with placing larger amounts of code inside the database, such as the following:

Performance Improvement – Using more stored procedures reduces how often you need Oracle database performance tuning. Each procedure has to be loaded into the shared pool only once, so executing it is quicker than running external code.

Code Segregation – When stored procedures hold all the SQL code, the application programs are reduced to calls to those procedures. This improves the data retrieval process because switching databases becomes simpler.

Therefore, one advantage you get through stored procedures is the ability to transfer large amounts of SQL code to the data dictionary. Doing this will enable you to perform SQL tuning without involving the application layer.
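As a simple illustration, here is a minimal sketch of moving a query out of application code and into a stored procedure; the EMPLOYEE table and its columns are assumptions borrowed from the examples later in this guide.

-- The SQL now lives in the data dictionary, so it can be tuned
-- without touching the application layer
CREATE OR REPLACE PROCEDURE get_employee_name (
    p_emp_id   IN  employee.emp_id%TYPE,
    p_emp_name OUT employee.emp_name%TYPE
) AS
BEGIN
    SELECT emp_name
    INTO   p_emp_name
    FROM   employee
    WHERE  emp_id = p_emp_id;
END;
/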

Group Data Easily – You can gather relational tables containing data that shares certain behaviour before looking for Oracle performance tuning tips. Simply use Oracle stored procedures as methods, along with suitable naming conventions – for example, linking the behaviour of the table data to the table name in the form of prefixes.

Users may then query the data dictionary to display all the procedures connected to one table. This makes it more convenient to recognise and reuse code with the help of stored procedures.
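For instance, here is a hedged sketch of such a data dictionary query, assuming the convention prefixes procedure names with the table name:

-- List every stored procedure that follows an EMPLOYEE prefix convention
SELECT object_name
FROM   user_procedures
WHERE  object_name LIKE 'EMPLOYEE%';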

Other Reasons Behind the Increasing Popularity of Stored Procedures

There are plenty of other reasons why stored procedures and triggers take less time compared with conventional code. One of these involves SGA caching in Oracle Database and SQL.

Once the shared pool within the SGA gets hold of a stored procedure, it keeps it there until the procedure is paged out of memory, usually to create space for other stored procedures. The paging-out process follows a Least Recently Used (LRU) algorithm.

Two parameters help determine the amount of memory Oracle allocates on startup: the Cache Size and Shared Pool Size parameters. They also help users check how much space is available for various tasks, including caching SQL code, data blocks, and stored procedures.
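As a quick check (a minimal sketch, assuming you have the privileges to query the V$ views), you can inspect how much memory these areas currently hold:

-- Show the current shared pool and buffer cache allocations in megabytes
SELECT name, ROUND(bytes / 1024 / 1024) AS size_mb
FROM   v$sgainfo
WHERE  name IN ('Shared Pool Size', 'Buffer Cache Size');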

Stored procedures will run extremely fast once you load them into the shared pool’s RAM – as long as you can avoid pool thrashing. This is important because several procedures compete for varying quantities of memory in the shared pool. 

How to index a SQL with an aggregate function for Oracle?

The following example SQL selects the maximum emp_address, which is not indexed, from the EMPLOYEE table of 3 million records; emp_grade is an indexed column.

select max(emp_address) from employee a
where emp_grade<4000

Since 80% of the EMPLOYEE table's records must be retrieved to examine the maximum emp_address string, the query plan's Table Access Full on the EMPLOYEE table is reasonable.

How many ways are there to build an index to improve this SQL?
Although it is a simple SQL, there are still three ways to build an index to improve it. The following are the possible indexes that can be built: the first is a single-column index, while the second and third are composite indexes with different column orders. Sample DDL for all three follows the list.
1. EMP_ADDRESS
2. EMP_GRADE, EMP_ADDRESS
3. EMP_ADDRESS, EMP_GRADE
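For reference, the corresponding CREATE INDEX statements might look like the following; the index names are illustrative only, not from the original example.

create index emp_address_ix       on employee (emp_address);
create index emp_grade_address_ix on employee (emp_grade, emp_address);
create index emp_address_grade_ix on employee (emp_address, emp_grade);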

Most people would pick EMP_ADDRESS as the first choice to improve this SQL. Let's see what the query plan looks like if we build a virtual index for the EMP_ADDRESS column: the estimated cost is reduced by almost half. However, this query plan is ultimately not used once the physical index is built for benchmarking, because actual statistics are then collected.

The following query plan shows that the EMP_ADDRESS index is not used; the plan is the same as that of the original SQL without any new index.

Let's try the second composite index (EMP_GRADE, EMP_ADDRESS). The new query plan shows an Index Fast Full Scan of this index – a reasonable plan, since no table data needs to be retrieved. The execution time is reduced from 16.83 seconds to 3.89 seconds.

Let's test the last composite index (EMP_ADDRESS, EMP_GRADE), with EMP_ADDRESS placed as the first column of the composite index. It creates a new query plan with an extra FIRST ROW operation above an INDEX FULL SCAN (MIN/MAX): Oracle can walk the index from the high end of EMP_ADDRESS and stop at the first entry that satisfies the emp_grade condition. This cuts the execution time from 16.83 seconds to 0.08 seconds.

So, indexing is sometimes an art that deserves extra attention – a less obvious solution may perform beyond your expectations.

The best index solution is now more than 200 times faster than the original SQL without an index. This kind of index recommendation can be produced automatically by Tosska SQL Tuning Expert for Oracle.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

Transferring Data in SQL Server with an Eye on Performance


Database professionals often need to archive older data in SQL Server by transferring it from one table to another. There are multiple ways to achieve the transfer; we will discuss the most useful of them in this blog, along with tips to ensure database performance doesn't suffer while these approaches are carried out.

Different Methods to Move Data from One Table to Another

Consider the following techniques that DBAs use when they have to move data from one table into another, along with some ways to improve the performance of SQL queries while using them:

  1. Insert data with the INSERT INTO command – The INSERT INTO query is one of the basic methods of moving data from table 1 to table 2, and you can reduce the time it takes. If the database is running under the full recovery model, switch it to the bulk-logged model. This saves execution time because it skips full logging of bulk operations. The following query should help:

ALTER DATABASE <database name> SET RECOVERY BULK_LOGGED

Once you switch to the bulk-logged recovery model, use a TRUNCATE statement to flush table 2 (the destination), then run the same script you were using to transfer the data. Remember to switch back to the full recovery model afterwards.

  2. Use the SELECT INTO query – Using SELECT INTO rather than INSERT INTO can prove useful in some cases, and the benefits are significant under the bulk-logged recovery model for the reason mentioned above. Although SELECT INTO cannot place the data in an existing table, SQL Server (since version 2017) lets you pick the filegroup where the new table is created.
  3. INSERT INTO query + TABLOCK hint – Using both in combination can give better database performance. To achieve this, apply the TABLOCK hint to table 2. If the destination table has no clustered index or other constraints, its data remains a heap, and using the TABLOCK hint while inserting into that heap with INSERT INTO optimizes logging and locking: a single table-level lock is taken rather than one per row or page. A sketch follows after this list.
  4. Adding data using the SWITCH TO query – You can also move the data with the ALTER TABLE ... SWITCH TO command. Although this query is typically used to transfer data between partitions of separate tables, it can help here as well: if there are no allocated partitions, the data switches between the tables themselves. Before you begin data insertion, make sure you disable any constraints or indexes on the destination table; from a performance perspective, it is better to re-enable constraints and rebuild indexes after the insertion.
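Below is a hedged sketch of methods 3 and 4; the table names and the date filter are illustrative, not from the original article.

-- Method 3: insert into a heap with the TABLOCK hint so a single table-level
-- lock is taken and the load can be minimally logged under bulk-logged recovery
INSERT INTO archive_table WITH (TABLOCK)
SELECT *
FROM   live_table
WHERE  created_date < '20200101';

-- Method 4: switch a non-partitioned table's data into an empty table
-- with an identical structure (a metadata-only operation)
ALTER TABLE live_table SWITCH TO archive_table;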

Tips for Enhancing Performance During Data Transfer and Insertion

  • Reduce IO lag – Latency can negatively impact the process of writing database files to disk. You can decrease latency and bottlenecks by using SSDs, which perform considerably better than traditional SATA or SCSI hard drives.
  • Maintain Robust Server Infrastructure – The system needs to be properly built to ensure competent performance for various database operations. The greater the pressure on the resources, the greater the effect on performance.
  • Follow ACID Properties – ACID properties ensure each transaction possesses certain properties as it is processed. In the case of data insertion, the isolation level is also important to consider because the values come from another source; statements should run at a suitable isolation level to maintain integrity within the database (see the sketch after this list).
  • Database Settings – One of the best ways to achieve improved outcomes is to maintain the right database configuration, because settings such as the location of the database files on disk and the TempDB configuration can have a significant effect on performance.
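For the isolation point above, here is a minimal T-SQL sketch; the isolation level, table names, and filter are illustrative assumptions.

-- Run the archive insert inside an explicit transaction at a chosen isolation level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
INSERT INTO archive_table
SELECT * FROM live_table WHERE created_date < '20200101';
COMMIT TRANSACTION;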

These are the various ways in which you can gain better performance at the query, trace, and constraint levels, along with settings that can improve the execution of insert operations.

How does the order of the columns in a composite index affect subquery performance for Oracle?


We know the order of the columns in a composite index determines whether the index can be used against a table: a query will use a composite index only if its WHERE clause references at least the leading (left-most) columns of the index. But things are far more complicated in correlated subquery situations. Let's use the following example SQL to elaborate.

SELECT D.*
FROM   department D
WHERE  EXISTS (SELECT Count(*)
               FROM   employee E
               WHERE  E.emp_id < 1050000
                      AND E.emp_dept = D.dpt_id
               GROUP BY E.emp_dept
               HAVING Count(*) > 124)

The following is the query plan of this SQL; it takes 10 seconds to finish. We can see that the SQL can utilize the E.emp_id and E.emp_dept indexes individually.

Let's see whether a new composite index can improve the SQL's performance. As a rule of thumb, the higher-selectivity column E.emp_id would be set as the first column of a composite index (E.emp_id, E.emp_dept).

The following is the query plan with the new composite index (E.emp_id, E.emp_dept), and the resulting performance is not good: it takes 11.8 seconds, even worse than the original query plan.

If we change the order of the columns in the composite index to (E.emp_dept, E.emp_id), the following query plan is generated and the elapsed time improves to 0.31 seconds.
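For reference, the two candidate composite indexes could be created as follows (the index names are illustrative only):

create index emp_id_dept_ix  on employee (emp_id, emp_dept);
create index emp_dept_id_ix  on employee (emp_dept, emp_id);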

The above two query plans are similar; the only difference is operation 2. The first composite index, with E.emp_id as the leading column, uses an INDEX RANGE SCAN of the new index, while the second plan uses an INDEX SKIP SCAN of the composite index led by E.emp_dept. Note the extra filter on E.emp_dept in the Predicate Information of the INDEX RANGE SCAN of (E.emp_id, E.emp_dept); the INDEX SKIP SCAN of (E.emp_dept, E.emp_id) needs no extra operation to filter E.emp_dept again.

So, you have to test the column order of a composite index very carefully for correlated subqueries – sometimes it will give you improvements that exceed your expectations.

This kind of index recommendation can be achieved by Tosska SQL Tuning Expert for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to use the ORDERED Hint to Tune a SQL with a Subquery for Oracle?

The following is the description of the ORDERED hint from the Oracle documentation.

The ORDERED hint causes Oracle to join tables in the order in which they appear in the FROM clause.

If you omit the ORDERED hint from a SQL statement performing a join, then the optimizer chooses the order in which to join the tables. You might want to use the ORDERED hint to specify a join order if you know something about the number of rows selected from each table that the optimizer does not. Such information lets you choose an inner and outer table better than the optimizer could.

We usually use an ORDERED hint to control the join order, but how does this hint behave for a SQL with a subquery? Let's use the following SQL as an example to see how the ORDERED hint works in a subquery case.

SELECT *
FROM   DEPARTMENT
WHERE  dpt_id IN (SELECT emp_dept
                  FROM   employee
                  WHERE  emp_id > 3300000)

The following is the query plan of this SQL; it takes 68.84 seconds to finish. The plan shows a TABLE ACCESS FULL of the DEPARTMENT table and a NESTED LOOPS SEMI join to an INDEX RANGE SCAN of EMPLOYEE.

If you think this is not an efficient plan, you may want to try reordering the join path to see whether an ORDERED hint works in a subquery case like this:

SELECT  /*+ ORDERED */ *
FROM  department
WHERE  dpt_id IN (SELECT  emp_dept
         FROM  employee
         WHERE  emp_id > 3300000)

Here is the query plan of the hinted SQL; it runs in 3.44 seconds, 20 times faster than the original SQL. The new query plan shows a new join order in which EMPLOYEE is retrieved first and then hash-joined to DEPARTMENT. You can see that the ORDERED hint places the subquery's table first. This new order causes a new data retrieval method for the EMPLOYEE table, making the overall performance much better than the original query plan.

This kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for Oracle. There are other hint-injection rewrites of this SQL with even better performance, but they are not suitable for this short article; I may discuss them later in my blog.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

How to tune a SQL that cannot be tuned?


Some mission-critical SQL statements have already reached their maximum speed within the current index configuration, meaning they cannot be improved by syntax rewrite or hint injection. Most people assume the only way to improve this kind of SQL is to upgrade the hardware. For example, in the following SQL statement every column in the WHERE clause is indexed, and Oracle already generates the best query plan; no syntax rewrite or hint injection can help Oracle improve its performance.

SELECT EMP_ID,
       EMP_NAME,
       SAL_EFFECT_DATE,
       SAL_SALARY
FROM   EMPLOYEE,
       EMP_SAL_HIST,
       DEPARTMENT,
       GRADE
WHERE  EMP_ID = SAL_EMP_ID
  AND  SAL_SALARY < 200000
  AND  EMP_DEPT = DPT_ID
  AND  EMP_GRADE = GRD_ID
  AND  GRD_ID < 1200
  AND  EMP_DEPT < 'D'

The following is the query plan and execution statistics of the SQL; it takes 2.33 seconds to extract all 502 records. That is not acceptable for a mission-critical SQL executed thousands of times an hour. Do we have another choice if we don't want to buy extra hardware to improve this SQL?

Introduce new plans for Oracle's SQL optimizer to consider
Although all columns in the WHERE clause are indexed, can we build some composite indexes that help Oracle's SQL optimizer generate new query plans that perform better than the original? Let's adopt the common practice of composing a concatenated index (EMP_ID, EMP_DEPT, EMP_GRADE) from the EMPLOYEE columns referenced in the WHERE clause below.

WHERE  EMP_ID = SAL_EMP_ID
  AND  SAL_SALARY < 200000
  AND  EMP_DEPT = DPT_ID
  AND  EMP_GRADE = GRD_ID
  AND  GRD_ID < 1200
  AND  EMP_DEPT < 'D'

CREATE INDEX C##TOSSKA.TOSSKA_09145226686_V0043 ON C##TOSSKA.EMPLOYEE
(
 EMP_ID,
 EMP_DEPT,
 EMP_GRADE
)

The following is the query plan after the concatenated index is created. Unfortunately, the SQL still takes 2.40 seconds, although Oracle's SQL optimizer introduces a new query plan.

To be honest, it is difficult to improve this SQL if we rely only on common practice or human knowledge to build indexes. Imagine instead that we have an AI engine that can try the most effective compound indexes to explore the potential solutions hidden in Oracle's SQL optimizer. The following concatenated indexes are what such an engine might recommend.

CREATE INDEX C##TOSSKA.TOSSKA_13124445731_V0012 ON C##TOSSKA.EMP_SAL_HIST
(
 SAL_SALARY,
 SAL_EFFECT_DATE,
 SAL_EMP_ID
)
CREATE INDEX C##TOSSKA.TOSSKA_13124445784_V0044 ON C##TOSSKA.EMPLOYEE
(
 EMP_GRADE,
 EMP_DEPT,
 EMP_ID,
 EMP_NAME
)

The following is the query plan after these two concatenated indexes are created; the SQL now runs in 0.13 seconds, almost 18 times faster than the original SQL without the new indexes.

The above indexes include some columns that appear in the SELECT list of the SQL, and there is a correlated relationship between the two indexes that Oracle's SQL optimizer exploits to generate this query plan. Omitting any column of the recommended indexes, or reshuffling the column positions, may fail to produce the same plan structure. It is therefore difficult for a human expert to compose these two concatenated indexes manually. I am glad to tell you that this kind of AI engine is actually available in the following product.

https://www.tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/

Importance of Backup & Recovery in MySQL Database and SQL


A database is the cornerstone of any application. For this reason, maintaining one or more backup and recovery options remains a priority for every database professional. There are multiple alternatives you can choose from as per the specific needs of your organization’s database.

In this post, we will examine some of the most popular backup and restore strategies for MySQL database and SQL. We will also touch upon the reasons why databases require backups on a regular basis.

Why Do We Need Backups for MySQL Database?

As a DBA, you’ll need backup and recovery to support data in multiple cases, such as:

Discrepancies in Data: Users may accidentally delete or update incorrect data in the primary or replica node.

Data Centre Failure: An indefinite power outage or internet connectivity issue can spell trouble for your organization.

Disk Damage: If the disk stalls for too long due to some kind of damage, performance can drop sharply. In cloud database services, this translates into a broken DB instance that cuts off access.

Broken Data: In case of a power outage, MySQL may fail to write data and close files as usual. There are also instances where MySQL fails to restart because the crash recovery process cannot cope with the corrupted data.

Legislation/Regulation: Keeping backups may be a legal or regulatory requirement; solid backup and recovery options also safeguard business value and client satisfaction.

Various Kinds of Backups for MySQL Database

Given below are some common backup categories that suit a range of needs:

Physical: These are exact copies of the database files and may contain part or all of the MySQL data directory. The most common use of this type of backup is to build a new replica node or to recover from host failure in a convenient manner. Experts recommend restoring the data with the same MySQL database version.

Offsite: This is one of the most recommended backup alternatives as it guarantees an untouched copy in case of data centre or host failure. It involves copying the data to the cloud, an external file server or another external source. However, sometimes it may take longer to download the files from the cloud or server than the recovery process. Therefore, experienced database professionals keep about a week of data locally on a backup server for quick recovery.

Logical: This type is useful for smaller quantities of data, as it is slower than physical backup methods. It essentially consists of dumps of CREATE and INSERT statements, and is handy for addressing data corruption or recovering a subset of tables. Although the output is larger in logical backups, especially in text format, you can compress it on the fly with the tools you're using; for instance, output from both mydumper and mysqldump can be compressed or redirected to a zip archive.

Incremental: This type of backup contains all the changes made in the organization's database since the last backup. It is therefore quite useful for enormous datasets, since it allows you to take small backups as new data arrives (experts recommend doing so after you've taken a full backup).

Differential: It consists of copying the modifications made since your previous full backup. One advantage of a differential backup is that it saves disk space: because only the changed data is copied, the resulting backups are substantially smaller.

Oracle Database and SQL: The Pros & Cons of Bind Variables


Bind variables are typically considered one of the major contributors to enhanced SQL query performance. According to the Oracle documentation, they serve as placeholders in a SQL query and are replaced by particular values when the statement executes.

Using these variables enables users to create statements that receive parameters, or inputs, at run time. One may think of bind variables as "values" passed to the SQL query, much like arguments to a function in programming languages. Here, we will talk more about them as well as their advantages and disadvantages in Oracle Database and SQL.

Bind Variables: Examples of Their Uses

Consider the following statements in SQL –

Select * from Staff where S_No = 1;

Select * from Staff where S_No = :a;

In the first query, a literal value (1) is used to run the query, whereas in the second we have used a bind variable (:a) to run the statement. The bind variable's value is supplied to Oracle when the query is run.
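As a minimal sketch of how this looks in SQL*Plus (the Staff table comes from the example above; the variable name is arbitrary):

-- Declare a bind variable, assign it a value, then run the query with it
VARIABLE a NUMBER
EXEC :a := 1
SELECT * FROM Staff WHERE S_No = :a;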

Defining a bind variable in a SQL statement in place of literal values ensures that Oracle uses a single parent cursor for the query. This improves database performance because Oracle looks for an exact text match of the query to check whether it already exists in the shared pool; using a bind variable rather than a literal value saves an expensive hard parse on each run of that query.

Bind variables prove especially useful in OLTP-type environments because their use facilitates soft parsing – in other words, less processing time is spent re-generating the execution plan.

How Bind Variables Help Improve Database Performance

Given below are some advantages of using bind variables:

  1. Optimal Use of Shared Pool – The Shared Pool in Oracle Database needs to hold a single query instead of possibly numerous queries, thanks to bind variables.
  2. Improved Performance Due to Zero Hard Parsing – There is no need for hard parsing because SQL queries only diverge in terms of values.
  3. Decreased "library cache" Latch Contention – Since the library cache latch is taken during a hard parse, contention for it reduces when bind variables are used.

Shortcomings of Bind Variables

The disadvantages of using bind variables are few. Although they are excellent for improving Oracle database performance, there are instances where their use can negatively impact results:

  • They can reduce the information available to the CBO (Cost-Based Optimizer) for computing the best access path. The CBO may then fail to determine the correct selectivity and create inefficient execution plans, opting for a full table scan instead of using indexes.
  • Sometimes, the CBO needs the literal value used by the SQL in order to build a robust execution plan. With bind variables the literal value is "hidden", so the CBO is likely to create a subpar plan.

To overcome this issue, Oracle provides further assistance to the CBO by letting it peek at the bind variable's value during execution plan creation, a feature known as "Bind Variable Peeking".

Conclusion

The use of bind variables is extremely beneficial for Oracle database performance, especially in OLTP environments. However, you need to be careful with them: it is recommended to use bind variables for short-runtime SQL, but literals for long-running SQL statements, to give more information to the CBO so it can generate good query plans.

How to Tune SQL Statement with OR conditions in a Subquery for SQL Server?


The following example shows a SQL statement with an EXISTS subquery. The SQL counts the records from the EMPLOYEE table for which the OR conditions are satisfied in the subquery on the DEPARTMENT table.

select count(*) from employee a
where exists (select 'x' from department b
              where a.emp_id = b.dpt_manager
                 or a.emp_salary = b.dpt_avg_salary)

The following is the query plan in Tosska's proprietary tree format; the SQL takes 4 minutes and 29 seconds to finish.

The query plan shows a Nested Loops from EMPLOYEE to a full table scan of DEPARTMENT, which is the main problem of the entire plan: SQL Server cannot resolve the OR conditions "a.emp_id=b.dpt_manager or a.emp_salary=b.dpt_avg_salary" with other join operations.

Let me rewrite the OR conditions in the subquery into a UNION ALL subquery as follows. The first part of the UNION ALL represents the "a.emp_id=b.dpt_manager" condition; the second part represents the "a.emp_salary=b.dpt_avg_salary" condition but excludes the data that already satisfied the first condition.

select count(*)
from   employee a
where  exists ( select 'x'
                from   department b
                where  a.emp_id = b.dpt_manager
                union all
                select 'x'
                from   department b
                where  ( not ( a.emp_id = b.dpt_manager )
                          or b.dpt_manager is null )
                   and a.emp_salary = b.dpt_avg_salary )

The following is the query plan of the rewritten SQL. It looks a little complex, but the performance is very good now: it takes only 0.447 seconds. Two Hash Match joins replace the original Nested Loops from EMPLOYEE to the full table scan of DEPARTMENT.

Although the steps to the final rewrite are a little complicated, this kind of rewrite can be achieved automatically by Tosska SQL Tuning Expert for SQL Server; the rewrite is more than 600 times faster than the original SQL.

Tosska SQL Tuning Expert (TSES™) for SQL Server® – Tosska Technologies Limited

How to Tune SQL Statements to Run SLOWER… but Make Users Feel BETTER (Oracle)?


Your end-users may keep complaining that some functions of their database application are running slow, even though you have found that those SQL statements have already reached their maximum speed in the current Oracle and hardware configuration. There may be no way to improve the SQL unless you are willing to upgrade your hardware. To make your users feel better, sometimes you don't have to tune your SQL to run faster; for certain application SQL statements, you can tune it to run slower overall.

Here is an example SQL that displays information from the EMP_SAL_HIST and EMPLOYEE tables where certain criteria are satisfied. This SQL runs as an online query, and users have to wait at least 5 seconds after the mouse click before any data is shown on screen.

select * from employee a, emp_sal_hist c
where a.emp_name like 'A%'
  and a.emp_id = c.sal_emp_id
  and c.sal_salary < 1800000
order by c.sal_emp_id

The following is the query plan and execution statistics of the SQL. It takes 10.41 seconds to extract all 79,374 records, and the first-record return time ("Response Time") is 5.72 seconds. The plan shows a MERGE JOIN of the EMPLOYEE and EMP_SAL_HIST tables, with two sorting operations on the corresponding tables before they are merged into the final result. That is why users have to wait at least 5 seconds before anything shows on the screen.

Because of the condition "a.emp_id = c.sal_emp_id", we know that "ORDER BY c.sal_emp_id" is equivalent to "ORDER BY a.emp_id". Since a SQL syntax rewrite cannot force a specific operation into the query plan for this SQL, I added the optimizer hint /*+ INDEX(@SEL$1 A EMPLOYEE_PK) */ so that rows come back in a.emp_id order and the sort for the ORDER BY is avoided.

SELECT /*+ INDEX(@SEL$1 A EMPLOYEE_PK) */ *
FROM   employee a,
       emp_sal_hist c
WHERE  a.emp_name LIKE 'A%'
  AND  a.emp_id = c.sal_emp_id
  AND  c.sal_salary < 1800000
ORDER BY c.sal_emp_id

Although the overall elapsed time is 3 seconds higher with the new query plan, the response time is reduced from 5.72 seconds to 1.16 seconds, so users see the first page of information much more promptly. I believe most users won't care that all 79,374 records take 3 more seconds to be returned. This is why SQL tuning is an art rather than a science when you are managing your users' expectations.

This kind of rewrite can be achieved by Tosska SQL Tuning Expert for Oracle automatically.

https://tosska.com/tosska-sql-tuning-expert-pro-tse-pro-for-oracle/