Month: December 2013

Statistics gathering issue

This issue came up a few weeks ago. I was asked to look into a time-consuming batch job that was running for 5-6 hours every day, and the client wanted to reduce the execution time. This is what the job does every day:

  1. Drop the indexes
  2. Load data into the tables
  3. Recreate the indexes
  4. Gather statistics
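
In Oracle terms, the daily flow looks roughly like the sketch below. This is only an illustration; every object name here is an assumption, not taken from the actual job.

  -- Illustrative outline of the daily job (all names are made up)
  DROP INDEX sales_stg_ix1;                              -- 1. drop the indexes

  INSERT /*+ APPEND */ INTO sales_stg                    -- 2. load data into the tables
  SELECT * FROM ext_sales_feed;
  COMMIT;

  CREATE INDEX sales_stg_ix1 ON sales_stg (sale_dt);     -- 3. recreate the indexes

  BEGIN                                                  -- 4. gather statistics
    DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SALES_STG');
  END;
  /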

This looked fine to me at first. The stats procedure alone was running for a few hours, and the client was getting worried about the execution time. Once I had tuned the queries in the loading procedure, I decided to work on the stats gathering script.

Here is a copy of the stats gathering script I received. I could see a lot of issues with the way it was written and executed.

[Screenshot: original stats gathering script]

Some of the issues are:

  1. The procedure collects stats for all the tables, irrespective of whether they have actually changed.
  2. It does the same for all the indexes.
  3. Index stats are collected twice: once when the indexes are created, and again by the procedure.
  4. Parallelism is not used.

Here is the modified script, and guess what, it completed a lot faster!!

[Screenshot: modified stats gathering script]
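
The screenshots are not reproduced here, so the snippet below is only a minimal sketch of the kind of change described: gather stats just for the tables that were loaded, skip the redundant index stats (CREATE INDEX already computes them), and use parallelism. The table name and degree are illustrative assumptions.

  -- Sketch only; the real script covers the project's own loaded tables
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => USER,
      tabname          => 'SALES_STG',                   -- only the tables that were loaded
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      degree           => 8,                             -- assumed degree of parallelism
      cascade          => FALSE);                        -- index stats already gathered at CREATE INDEX
  END;
  /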


Oracle Cardinality hint

Recently I came across a query whose cost was comparatively high. I was wondering how such a simple query could end up with such a high cost. The query was called many times during the execution, which considerably increased the overall execution time of the process. It sat inside a procedure that takes a collection as an input parameter, and the collection is used to filter data from a table with 544K rows.

The business logic is such that only 5 members can be passed in the collection at a time. I was surprised to see Oracle doing a full table scan (FTS) to fetch 5 rows on an indexed column, so I decided to dig a little deeper and find the root cause.

Original Query

[Screenshot: original query]
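
The actual query is in the screenshot, but the pattern was roughly the one below, where the collection parameter is unnested with TABLE() and used to filter the table. The type, table, and column names are my own illustrations.

  -- Hypothetical shape of the original code (all names are illustrative)
  CREATE OR REPLACE TYPE t_id_tab AS TABLE OF NUMBER;
  /

  -- Inside the procedure, p_ids (of type t_id_tab) filters the 544K-row table
  SELECT o.*
  FROM   orders o
  WHERE  o.order_id IN (SELECT column_value FROM TABLE(p_ids));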

Plan

[Screenshot: execution plan showing a full table scan]

Here, Oracle estimates the size of the collection as 8168 rows, although we have only 5 elements in it (the optimizer's default estimate for a collection unnested with TABLE() is based on the block size, roughly 8168 rows for an 8K block). I decided to add the CARDINALITY hint and give the optimizer some more information.

Modified Query with cardinality hint.

[Screenshot: modified query with CARDINALITY hint]
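
As a sketch only, using the same illustrative names as above, the change amounts to telling the optimizer how many rows to expect from the collection:

  -- CARDINALITY hint: expect about 5 rows from the unnested collection
  SELECT o.*
  FROM   orders o
  WHERE  o.order_id IN (SELECT /*+ CARDINALITY(t 5) */ column_value
                        FROM   TABLE(p_ids) t);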

Plan

[Screenshot: execution plan after adding the hint]

We can clearly see that Oracle now estimates the collection size properly and is able to generate the best plan. We got a big performance improvement with this small change.

How Oracle plan baselines saved our day!!

We had an interesting performance issue in our project yesterday. Our senior performance DBA Joe was able to fix it very quickly. I will explain what the issue was and how Joe analyzed it step by step and resolved it using Oracle plan baselines.

It all started when we noticed one of our batch jobs was using too much CPU and time. A quick OEM analysis showed considerably high resource usage while the batch job was running, most of it consumed by a single SQL query.

[Screenshot: the top SQL query identified in OEM]

The plan control screen showed that we had two execution plans for the query and Oracle was picking the cheaper one.

The lower-cost plan, however, was using the wrong index to scan the tables. Once Joe had found the problem, it was a matter of devising the solution.
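
Outside OEM, the same information can be pulled from the cursor cache; a minimal sketch (the SQL_ID below is a made-up placeholder):

  -- List the plans Oracle has cached for the statement
  SELECT sql_id, plan_hash_value, child_number, executions, elapsed_time
  FROM   v$sql
  WHERE  sql_id = '7b2twsn8vgfsc';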

He had 3 options

  1. Ask developers to rewrite the code and deploy it.
  2. Add a hint and direct Oracle to use the correct index
  3. Use a SQL plan baseline to make Oracle use the right plan

The first two options were ruled out because any code change has to follow the deployment lifecycle and could take weeks to hit production. So he immediately chose the third option.

[Screenshot: SQL plan baseline setup for the query]

So he simply told Oracle to use the second plan via a SQL plan baseline, and bingo!! The query started running smoothly.
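
A minimal sketch of that step, assuming we know the SQL_ID and the PLAN_HASH_VALUE of the good plan (both values below are made-up placeholders):

  -- Capture the desired plan from the cursor cache as a fixed SQL plan baseline
  DECLARE
    l_plans PLS_INTEGER;
  BEGIN
    l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                 sql_id          => '7b2twsn8vgfsc',
                 plan_hash_value => 1234567890,
                 fixed           => 'YES');
    DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
  END;
  /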

You can read more about SQL plan baselines here:

http://docs.oracle.com/cd/B28359_01/server.111/b28274/optplanmgmt.htm#BABEAFGG

Second day..

I’m learning this stuff very slowly. It’s time to make my blog look good: adding a few images and a background, and linking all my favorite blogs to mine. So if you get bored with my rants, you can read some good stuff from the masters!!

My first blog..

Well, this is my first blog post. I have been thinking about blogging for some time and finally decided to give it a try. I consider this more of a journal than a blog. I’m a database performance engineer and I face many performance issues in my work; I want to chronicle them all for my future reference. I will be really happy if my blog helps someone. After all, “knowledge grows when shared.”