Sunday, 13 December 2015

Dude, Where's My Datafile?

disclaimer: here

ASM can be too transparent at times.

I found that out when I was investigating read errors (ORA-19501 & ORA-15081 - which caused RMAN backup to fail) on a datafile.

For a reason that will be explained below, it was necessary to find out which ASM disk the datafile resides on, but you can't retrieve that information from V$ASM_ views, because they do not reveal the association between datafiles (i.e. the ASM Aliases) and the ASM disk itself.

Environment: an ASM diskgroup with "external" redundancy, shared between six RAC databases. GI and Database were on the same version, but the solution applies to other versions as well.

unrecoverable error ORA-15188 raised in ASM I/O path

Considering the chances of logical corruption, and using the block number reported in ORA-19501, it is possible to find the affected segment (see Stefan Knecht's article) and take action to prevent the same error from recurring during RMAN backup, such as deleting the segment.

But after deleting the segment, although I could run the backup successfully, validation of the datafile in RMAN failed, citing the exact same errors, on the exact same block.

This is due to the difference between the mechanisms of backup and validation: "When validating whole files, RMAN checks every block of the input file," whereas RMAN backup touches only blocks that contain data and skips those that do not.
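The difference can be demonstrated with RMAN itself. A hedged sketch; the datafile number is hypothetical:

```sql
-- RMAN sketch (datafile number is made up for illustration)
-- Backup reads only blocks that have ever held data:
BACKUP DATAFILE 7;      -- completes, because the bad blocks hold no data
-- Validation reads every block of the file:
VALIDATE DATAFILE 7;    -- fails with ORA-19501 on the same block
```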

Incidentally, checking V$ASM_DISK.READ_ERRS revealed that one of the ASM disks had experienced read errors. Interestingly, the errors could also be traced in the ASM instance's alert.log, which cites information about the location of the issue:

WARNING: Read Failed. group:2 disk:2 AU:235530 offset:786432 size:131072
ERROR: unrecoverable error ORA-15188 raised in ASM I/O path; terminating process 28997
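The check that pointed at the failing disk can be sketched as follows, run against the ASM instance:

```sql
-- Find ASM disks that have reported read errors
SELECT group_number, disk_number, name, path, read_errs
FROM   v$asm_disk
WHERE  read_errs > 0;
```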

Enter The Headless Chicken!

That's me: totally ignoring the fact that although I don't know the solution, I SHOULD know what's NOT the solution!

Indications of the issue in ASM instance's alert.log (warnings similar to the one in the previous lines) dated back to the last time an ASM disk was added to the diskgroup, and rebalancing was triggered.

Rebalancing had made ASM go through a continuous series of start/failure/stop cycles of the ARB0 background process: the initial attempt at rebalancing (after adding a new ASM disk to the diskgroup) started ARB0, failed with ORA-15188, and led to ARB0 stopping and the whole process restarting over and over again.

Yet, the headless chicken decided to drop the ASM disk from the diskgroup to rectify the problem!

Eventually the futility of the process dawned on me and I executed the following commands:

alter diskgroup DATA_01 undrop disks;
alter diskgroup DATA_01 rebalance power 0;

The solution

The key lies in the fact that both RMAN validation and ASM rebalancing deal with the same thing: the whole datafile.

In this case, the ORA-15188 errors indicated that ASM was unable to access some non-data-holding parts of the datafile on the storage, so the solution was to shrink the datafile to a size as close as possible to the volume of the actual data; after all, RMAN backup (which skips empty blocks) completed with no errors.
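Shrinking a datafile is a plain RESIZE. A hedged sketch; the file name and target size are hypothetical:

```sql
-- Shrink the datafile close to the size of the actual data
-- (file name and size are made up for illustration)
ALTER DATABASE DATAFILE '+DATA_01/mydb/datafile/users01.dbf' RESIZE 2G;
```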

But how can we make sure that datafiles of other DBs stored on the same diskgroup are not affected? First we need to find out which datafiles from which DBs have bits and pieces on the same ASM disk, and then validate each one using RMAN.

Which gets us to the main point of this post (finally!): the missing logical link between data files and ASM disks can be found in X$KFFXP fixed table.

I found out about it from "Secrets of Oracle Database" (Chapter 12, Norbert Debes, Apress, 2009) but there are other sources which discuss the fixed tables in ASM instance, such as "Pro Oracle Database 10g RAC on Linux" and Luca Canali's article here.

The following query can be used for this purpose:

select *
from v$asm_alias
where (FILE_NUMBER, FILE_INCARNATION) in
      (select distinct NUMBER_KFFXP, INCARN_KFFXP
       from x$kffxp
       where DISK_KFFXP = <disk_number according to alert.log>
       and LXN_KFFXP = 0
       and GROUP_KFFXP = <group_number according to alert.log>);

The join columns reveal the purpose of NUMBER_KFFXP and INCARN_KFFXP (they correspond to v$asm_alias.FILE_NUMBER and FILE_INCARNATION); GROUP_KFFXP and DISK_KFFXP are equal to v$asm_disk.GROUP_NUMBER and v$asm_disk.DISK_NUMBER; and LXN_KFFXP is the Logical Extent Number (0 = primary, 1 = mirrored copy).

Note for the proponents of set theory: Yes, I know that this query doesn't manifest an understanding of your beloved theory (and mine too, believe me!) but I found this syntax fast enough without requiring any hints to make sure the right join method is used.


The query revealed all the files (from different DBs) that were located on that specific ASM disk. Fortunately all of them were validated successfully by RMAN.

Eventually the segments of the problematic datafile were moved to a new tablespace, the data file was shrunk with no issues (and validated successfully by RMAN), and afterwards I was able to drop the ASM disk from the group.
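The final clean-up can be sketched as follows; all object, tablespace, and disk names are hypothetical:

```sql
-- Move segments out of the problematic tablespace (names are examples)
ALTER TABLE app.big_table MOVE TABLESPACE new_ts;
ALTER INDEX app.big_table_pk REBUILD TABLESPACE new_ts;

-- Then shrink the now mostly-empty datafile and drop the ASM disk
ALTER DATABASE DATAFILE '+DATA_01/mydb/datafile/old_ts01.dbf' RESIZE 100M;
ALTER DISKGROUP DATA_01 DROP DISK data_01_0002 REBALANCE POWER 4;
```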

Apparently some people have been living happily ever after.

p.s. I'd like to thank Martin Widlake and Neil Chandler for offering me advice on this post. However, I'm responsible for the statements made here, so please do let me know if you notice any errors.

Sunday, 17 May 2015

Nobody reads your blog!

Yes, nobody!

I mean: REALLY read it; going through all the words and sentences and paragraphs and getting through to the end.

During the time that I was working for BBC World Service (2002-2012) one of the more important guidelines for online content producers was: "All you have to play with is the first page, and in fact, first paragraph of your piece. You have to grab your audience's attention there, and say whatever you want to say in the top part of the page, before the user needs to scroll down, because they rarely do!"

There is plenty of evidence supporting this theory (see examples here, here and here) and, as far as I know, this is still what's preached at the BBC and many prominent international media organizations.

So what should we do?

Your blog post is a product

Your post should be a matchbox:
simple, small, to the point

And considering the evidence (read their conclusion as "global short-attention-span syndrome"), you need to come up with a product like a matchbox rather than a Swiss army knife: simple, small, to the point.

As a product you need to think about two main aspects:

a. Content (the idea itself; what is your "product" supposed to do?)
b. Form (packaging; how can you make sure that the "product" is bought, and also used?)

a. Content

You are a technical blogger (the assumed audience of this post) and have something to write about. I believe you need to answer these questions before putting pen to paper (or fingertip to keyboard... whatever):

1. What is the purpose of your post? Are you trying to spread knowledge? Educate people? Or just blow off steam about some bug you've come across? Do you actually want your post to be read by as many people as possible? Or are you happy as long as one or two like-minded people read it?

1.5. Consistency always raises your profile. Are you happy with being a part-time educator, part-time bug nagger? Or would you actually prefer to be known for only one of them?

2. Have you thought about your audience? Who are they? Are they "Oracle Ace Directors?" Are they supposed to be technically as able as you are? Have you considered the fact that even among Oracle Ace Directors people have different abilities? Not everybody is an "Optimizer guru," "Recovery master," or "Replication genius," and they all might still be Oracle Ace Directors. Do you care if someone reads your post and doesn't understand it?

3. Are you honest with yourself about your own -possible- shot at fame? If you want to be known as THE educator, or THE expert in a subject, it takes a lot of effort, not only to be that person, but also to make people notice you (i.e. to package/sell your "product"). It needs commitment to presentation/marketing fundamentals, and awareness of the market's characteristics and trends.

I found Tim Hall's advice very insightful on many other matters concerning management of this creative process.

b. Form

You can find plenty of style-guides on the internet (here and here, for example). They seem to have a number of key elements in common:
  1. Choose intriguing yet short titles.

    Most blog posts are advertised via Twitter, so a more compact title provides you with more room for hashtags to advertise your tweet (and therefore your blog post) properly.

    Do consider concepts such as "impact" and "controversy" when you come up with a title. Avoid questions in titles; that's generally frowned upon in the world of "proper" journalism.

  2. Start with an abstract/introduction.

    Don't waste your readers' time by failing to state the aim of your post; if you make them go through it only to find out it's not what they were looking for, they will hesitate to come back to your blog.

    Also people usually don't scroll down! If you don't present your idea in an exciting manner in the first two or three sentences, they'll simply leave your post!

  3. ALWAYS write short sentences, and short paragraphs.

    BBC World Service's guideline was "one sentence is enough in a paragraph," and if you read news on BBC's website, you can see how it looks.

    Nobody likes going through tens of uninterrupted lines of writing; especially on technical blogs which are usually wider than normal to avoid line-wraps when including programming code (or "execution plans" among Oracle bloggers).

  4. Break your post into little pieces.

    I advise using sub-headers religiously. It can make the scanning eyes of your readers stop, and pay attention to what you want them to pay attention to.

    Also think about the flow of your post. If you can use "ordered lists" as your sub-headers, do it.

  5. Use any tool that's available at your disposal to make your post less boring: images, links, unordered lists, Alex Gorbatchev's SyntaxHighlighter.

    You might think that your audience are too sophisticated for such "marketing tricks," but unfortunately we all (at least the more fortunate ones) can only read with our eyes. If the eyes don't like something they see, they try not to see it!

  6. A "Conclusion" part is as important as the "abstract/introduction."

  7. Audience is ALWAYS thinking: "I don't care who you think you are; I decide who you are!"

    So don't think they'll keep reading your posts no matter how long it is. They simply won't. Keep it short! (Unlike what I've done here, in my post!)

  8. Remove distractions from your blog post, as much as possible.

    So you are an OCA and an OCP and an OCM and an Oracle Certified Expert on something and you get tons of traffic that tens of companies want to advertise through your website? Sure, but every time someone comes into your page and the content of your post isn't the first thing that their eyes meet, you've effectively lost a reader.

  9. Making your blog mobile-aware increases your exposure.

    Make your post as accessible/visible as you can.

    Use meta tags or anything else that can help your post appear in search engines when looking for a related word/phrase.

    Also I need to echo Julian Dyke's advice (thanks to Martin Widlake for reminding me of it): for better exposure you need to make your blog mobile-aware. Google announced on 26 February 2015 how they were changing their search engine's criteria to give higher rankings to mobile-aware blogs and sites. Guidelines for testing, and the definition of "mobile-awareness," are included in the announcement.


Technical bloggers generally seem to read blogs in two circumstances:
  1. When they're looking for a solution to their own problem (i.e. while reading a blog post they keep saying in their head: "cut the bullshit!")
  2. At night, after they've finished with their day-to-day work (i.e. their eyes and brains are too tired to spend a lot of energy going through your "jewel of a post" line by line, and word by word.)
If you don't engage them as soon as they start reading the title of your post, and don't make the whole experience of reading it as comforting (and informative) as possible, you have effectively wasted your time!

Thursday, 14 May 2015

How to keep track of execution in multi-file/part Oracle scripts

disclaimer: here

Installation scripts can be a pain.

With multiple anonymous PL/SQL blocks here and there, and scripts calling other scripts, and bind variables being initialized by running yet other scripts and... You get the picture.

So if -God forbid- we want to check the result of an activity that might have happened in an earlier part of this "spaghetti of code," we need to kill a bunch of those precious gray cells and find a way!

Most of the time decisions can be based on "footprints in the data"; e.g. something is supposed to happen later in the script based on whether we have deleted a row or not. But if the script is supposed to be re-runnable (such as an installation/upgrade/patch script), how can you judge whether THAT row was deleted in this run, or a previous one?

Well, there is a solution after all.

Conditional Compilation Flags to the rescue!

As long as your script is running on a 10gR2 or later instance, and your whole script runs in one session, you can use session variables, aka Conditional Compilation Flags.

Their biggest advantage is that, unlike substitution variables (set by the DEFINE command), you can set/unset session variables inside OR outside a PL/SQL block:


ALTER SESSION SET PLSQL_CCFLAGS = 'flagName:value[,flag2Name:value2, ...]';

or inside a block:

EXECUTE IMMEDIATE 'ALTER SESSION SET PLSQL_CCFLAGS = ''flagName:value[,flag2Name:value2, ...]''';

Example Scenario

Imagine the following "install.sql" script (positioned in the same directory as "one.sql" and "two.sql"):
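The script itself was shown as an image in the original post; what follows is a minimal hypothetical reconstruction, assuming install.sql simply initializes the flags and calls the two scripts (the initial ALTER SESSION would account for the single "Session altered." in the sample output):

```sql
-- install.sql (hypothetical reconstruction; the original was an image)
set serveroutput on

-- Initialize both flags for the session (assumption)
ALTER SESSION SET PLSQL_CCFLAGS = 'activityResult:0,relatedActivity:0';

@one.sql
@two.sql

exit
```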



This sample script has two parts. The first anonymous block simulates an activity whose result affects the operation (i.e. code-path) of the second block in the same file, and consequently the operation in a second file.

The first block populates a flag named "activityResult" (note: Conditional Compilation Flags are case-sensitive) with either 0 or 1, depending on the result of the simulated operation.

The second block shows how you can check the value of the flag and create alternative code-paths.

$IF ... $THEN ... $ELSIF ... $ELSE ... $END is THE construct when it comes to Conditional Compilation.

The flags themselves are referred to as $$flagName.

-- one.sql
DECLARE
        v_RandomActivityResult  NUMBER := CEIL(DBMS_RANDOM.VALUE(0,6));
BEGIN
        dbms_output.put_line('script one - first block');
        dbms_output.put_line('Random Activity Result:' || to_char(v_RandomActivityResult));

        IF v_RandomActivityResult < 3 THEN
                -- Failure
                EXECUTE IMMEDIATE 'alter session set PLSQL_CCFLAGS = ''activityResult:0''';
        ELSE
                -- Success
                EXECUTE IMMEDIATE 'alter session set PLSQL_CCFLAGS = ''activityResult:1''';
        END IF;
END;
/

BEGIN
        dbms_output.put_line('script one - second block');
$IF $$activityResult = 1 $THEN
        dbms_output.put_line('The activity in the first block was successful!');
        dbms_output.put_line('Carrying out related activity...');
        EXECUTE IMMEDIATE 'alter session set PLSQL_CCFLAGS = ''relatedActivity:1''';
$ELSE
        dbms_output.put_line('The activity in the first block failed!');
        EXECUTE IMMEDIATE 'alter session set PLSQL_CCFLAGS = ''relatedActivity:0''';
$END
END;
/


The "one.sql" script showed that it is possible to access a session flag inside an anonymous block which hasn't populated/initialized the flag. The following script simply provides an example to clarify what "session" means!

-- two.sql
BEGIN
        dbms_output.put_line('script two');
$IF $$relatedActivity = 1 $THEN
        dbms_output.put_line('Related activity was carried out.');
$ELSE
        dbms_output.put_line('Related activity was not carried out.');
$END
END;
/

"two.sql" is merely another file which (as can be seen above) is run as part of "install.sql", so it's still in the same session, and can access the "relatedActivity" flag that was initialized in the second block inside "one.sql."

A sample output:

$ sqlplus /nolog @install

SQL*Plus: Release Production on Thu May 14 14:20:00 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Session altered.

script one - first block
Random Activity Result:1

PL/SQL procedure successfully completed.

script one - second block
The activity in the first block failed!

PL/SQL procedure successfully completed.

script two
Related activity was not carried out.

PL/SQL procedure successfully completed.

Disconnected from Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options


This post was an extremely simple demonstration of what Oracle offers through the concept of Conditional Compilation, and the directives are not limited to those used in this post.

You can find Oracle's reference on the subject here.

Saturday, 2 May 2015

DBMS_XPLAN can lie!

disclaimer: here

"Don't trust what you see!"

A cliche in spy movies, this sentence can also have a less impressive interpretation: "Don't be over-excited by any finding!"

I came to believe in this interpretation when I was studying for Oracle SQL Tuning Expert exam.

Going through experiments on using different DOPs for an INSERT...SELECT statement (a more comprehensive post coming up soon), I came across something which looked like a behavior change between 11g and 12c and -more importantly- seemed to contradict the documentation.

All I heard in my brain was this: "Shit! This is going to put me on the map! OakTable! Here I come!"

I got in touch with Martin Widlake, who was kind enough to go through the results of my experiment, and advised me to publish the results -which I'm extremely grateful for it.


Just for clarification, I'd like to rewrite the rule for calculation of DOP in INSERT..SELECT as below:

After the decision to parallelize the SELECT or INSERT operation is made, one parallel directive is picked for deciding the DOP of the whole statement, using the following precedence rule:

1.       Insert hint directive
2.       then Session
3.       then Parallel declaration specification of the inserting table
4.       and then Maximum query directive.

Oracle states: "In this context, maximum query directive means that among multiple tables and indexes, the table or index that has the maximum DOP determines the parallelism for the query operation."

(You can find 11gR2's original documentation here and 12c's here.)

But the results of the experiment below appear to suggest that rule 4 overrides rule 3 in this version.







The experiment


As you can see below, there are no hints on the statement, and Parallel DML is merely enabled on the session. We are doing a simple direct-path insert into a table with a parallel declaration of 3, and selecting from a table which has a parallel declaration of 4. So based on the Oracle documentation we expect the DOP to be 3 (i.e. "Parallel declaration specification of the inserting table") despite the higher DOP on the source table.
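A hedged sketch of the setup implied by this description; the table names are taken from the plan below, but the exact DDL is an assumption:

```sql
-- Hypothetical setup matching the description above
ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE destination_data PARALLEL 3;   -- inserting table, DOP 3
ALTER TABLE source_data      PARALLEL 4;   -- source table, DOP 4

INSERT /*+ append TEST3 */ INTO destination_data
SELECT * FROM source_data;
COMMIT;
```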


SQL_ID  cwrqam2gbx7ds, child number 0                                                         
INSERT /*+ append TEST3 */ INTO DESTINATION_DATA SELECT * FROM                                
Plan hash value: 2115782672                                                                   
| Id  | Operation                          | Name        | Rows  |    TQ  |IN-OUT| PQ Distrib |
|   0 | INSERT STATEMENT                   |             |       |        |      |            |
|   1 |  PX COORDINATOR                    |             |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10000    |  1000K|  Q1,00 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)|             |       |  Q1,00 | PCWP |            |
|   4 |     OPTIMIZER STATISTICS GATHERING |             |  1000K|  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR             |             |  1000K|  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL            | SOURCE_DATA |  1000K|  Q1,00 | PCWP |            |
Predicate Information (identified by operation id):                                           
   6 - access(:Z>=:Z AND :Z<=:Z)                                                              
   - Degree of Parallelism is 4 because of table property                                     
 28 rows selected

WHAT?! DOP is 4?!

Not so fast though ... #sigh

During the comparison of the behavior between 11g and 12c, I kept checking V$SQL.PX_SERVERS_EXECUTIONS, mainly because DBMS_XPLAN.DISPLAY_CURSOR was not giving any hints about the employed DOP.

I hadn't done it for 12c because the output seemed to have enough information in that regard. But Martin Widlake suggested I do the same, to be consistent.

SELECT sql_id, child_number, px_servers_executions
FROM v$sql
WHERE sql_text LIKE '%TEST3%';

SQL_ID        CHILD_NUMBER PX_SERVERS_EXECUTIONS
------------- ------------ ---------------------
cwrqam2gbx7ds            0                     3

There you go! Oracle's documentation was not wrong after all. The DOP is in fact equal to the parallel declaration on the inserting table (i.e. 3).

One last blow came from OEM. You can clearly see the parallel servers involved in the INSERT...SELECT.

Just a note: seeing only one set of parallel servers proves Oracle's statement that when the decision is made to do both the INSERT and the SELECT in parallel, ONE parallel directive is used to carry out the operation; i.e. both operations use the same DOP.


Don't be deceived by the glamor of the extra information presented by DBMS_XPLAN: it can lie.

Here I have to mention that I had also sought Jonathan Lewis' opinion about the post.

I'd like to quote this from Jonathan's response, when I got back to him to say that I was wrong:

 "There's always one more thing to check."


Dear Reader!

I hereby declare that the material published in this blog has no purpose other than to remind fellow colleagues about some aspects of Oracle Database's behavior that 1st) I have come across, and 2nd) I've felt are not commonly known.

Also I hope that the posts help people find problems and [possible] solutions more easily on the web, by simply spreading the word and increasing the exposure of the posts' subjects to search engines such as Google.

Therefore I would be grateful if the reader could remind me of a neglected [perhaps earlier] reference to the subject, by leaving a comment.

Most importantly, the posts should by no means be regarded as a manifestation of knowledge, nor experience. They are merely results of experiments that I have been forced to carry out, mainly by chance, and influenced by the projects that I'm involved in, exams I've taken, or curiosity.

It's up to you, the reader, to re-establish what's being presented in this blog as facts, and I humbly ask you to leave a comment, if you find an incorrect or incomplete piece of information on a post.

Babak Tourani

Friday, 20 February 2015

Parallel Processing: the Damien Hirst of SQL Tuning Art World

After spending a decade of my life among people whom I eventually came to regard generally as imposters (i.e. journalists), I learned something important about myself: I'm a purist committed to "principles" and I hate bullshitters!

But even if you ignore the pointless self-glorification of the first paragraph, you must have understood from the title that I'm not a fan of Damien Hirst!

Taking The "SQL Tuning Expert" certification exam (1Z0-117) recently reminded me that perhaps Damien Hirst is not the only entity which -in my opinion- is placed in a context that it doesn't belong to; or AT LEAST maybe we shouldn't regard people like him as such "integral" parts of the art world.

The following is a review of subjects covered in the exam, and what was overemphasised.

Here It Goes!

Disclaimer: If you've found your way to this post hoping that you might find some exam dumps or samples, you can press "Back" on your browser RIGHT NOW!

My only aim is to share what I remember (I will update if/when I remember more) from the test about what needs to be learned so that an applicant can pass the exam to achieve "SQL Tuning Expert" certification, mainly because not even a sample test was provided by Oracle.

I divide this post to three parts:

  1. Main Subjects
  2. References and Resources
  3. A Demand for Keeping Things In Perspective

1. Main Subjects 

You can find the exam topics here, but I did not find them representative of the main emphasis points of the exam; so here are the actual main topics (not all of them, and not in the order of emphasis in the exam!):
  1. Reading Execution Plans
  2. Query Transformation
  3. Partition Pruning
  4. Parallel Processing
  5. DBMS_MONITOR/trcsess/tkprof 
  6. SQLTuningSets, SQL Tuning Advisor, and SQL Access Advisor
  7. SQL Profiles and SQL Plan Management

1.1. Reading Execution Plans


The plans that you come across in the exam are not complicated (in particular, stats-related issues such as cardinality mismatches and manifestations of stale stats are generally obvious), and you will have seen them all if you have gone through the Oracle documentation mentioned below under "References and Resources."

  • You should be loving this stuff! If you don't like cracking the execution plans code, why on earth are you taking this exam anyway?!
  • You should be familiar with different interpretation methods of execution plans in terms of finding out the chronology of the events and processes involved.
  • More or less every SQL Tuning case that is mentioned in the official exam topic is referenced in the exam through an execution plan.


 1.2. Query Transformation


  • When does each of View Merging, Subquery Unnesting, and Query Rewrite happen?
  • When can't they happen?
  • What are the signs of each taking place in an execution plan?
  • When would you see a "Projection View" during View Merging?
  • How do you distinguish a Bitmap Join from a Star Transformation?


1.3. Partition Pruning


  • What is the sign of Partition Pruning NOT taking place on a partitioned table, in execution plan?
  • Generally, how can predicates affect partition pruning in different partition methods? For example, what can cause a Full Table Scan of a partitioned table? What kind of predicates will facilitate partition pruning in Range, List, Hash, or System partitioned tables?
  • Full/Partial partition-wise joins: When will they happen in parallel?
  • You need to keep an eye on the "IN-OUT" column of the execution plans on parallel partition-wise joins and "S -> P" operations: not all the tables are read in parallel!
  • What is System Partitioning?


1.4. Parallel Processing


You need to know about the following concepts:

  • How do different values of PARALLEL_DEGREE_POLICY (AUTO|MANUAL|LIMITED) affect the behaviour of the database?
  • How is the DOP for a SQL statement calculated under each of the modes mentioned above?
  • Under which circumstances is Auto DOP used?
  • How is DOP calculated in each mode?
  • In each mode of PARALLEL_DEGREE_POLICY, when is a statement run in parallel; i.e. using which syntax (i.e. hint) or which parallel settings for the segments involved?
  • Under which conditions are parallel SQL statements queued?
  • How/when does the value of PARALLEL_MIN_TIME_THRESHOLD affect the execution of a SQL statement?
  • Under which circumstances can CTAS and INSERT ... SELECT run in parallel? And when does the "SELECT" part of both operations [not] run in parallel?
  • What could cause either an ORA-7454 or an ORA-12827 during parallel execution?
  • This is a tricky one: when does a direct-path insert into an IOT NOT take place in parallel, even if the IOT is created in parallel mode?
  • How can Resource Management be deployed to provide services with different parallel processing characteristics (e.g. statement queuing)?
  • Under which mode of PARALLEL_DEGREE_POLICY, and how, does lack of I/O Calibration affect parallel processing?
  • When should the PARALLEL(MANUAL) hint be used?
  • When can In-Memory parallel execution be employed? Where is the data kept in such an environment? (PGA vs. Buffer Cache)
  • Under which circumstances can DML statements not be executed in parallel?
  • How does parallel execution take place in RAC?
  • What's the effect of each of the following parameters?
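As a quick illustration of the first bullet, a hedged sketch of how the policy modes differ in practice (the table name is hypothetical):

```sql
-- Under MANUAL policy, parallelism must be requested explicitly
-- via a hint or a parallel declaration on the segment:
ALTER SESSION SET parallel_degree_policy = MANUAL;
SELECT /*+ PARALLEL(t, 4) */ COUNT(*) FROM sales t;

-- Under AUTO policy, the optimizer may choose a DOP on its own,
-- subject to PARALLEL_MIN_TIME_THRESHOLD and statement queuing:
ALTER SESSION SET parallel_degree_policy = AUTO;
SELECT COUNT(*) FROM sales t;
```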


1.5. DBMS_MONITOR/trcsess/tkprof


  • Which kinds of trace (i.e. which DBMS_MONITOR procs) could produce multiple trace files?
  • You need to be familiar with the sequence of actions from generating trace to using trcsess and eventually tkprof.
  • Which program consolidates trace from different trace files? trcsess? or tkprof?
  • How does tkprof treat recursive calls?
  • tkprof parameters; for example what options are available for sorting the output items?
  • How many trace files are produced by for each


 1.6. SQL Tuning Sets, SQL Tuning Advisor, SQL Access Advisor 


  • What can be kept in an STS? The syntax of the different STS management operations matters less than a full grasp of the STS management process.
  • What are the data sources for Automatic and Manual invocations of SQL Tuning Advisor?
  • What are the sources for the SQL Access Advisor?
  • What is the functional relationship between STS, Tuning Advisor, and Access Advisor? Which one is used by which?
  • What are the outputs of the Tuning Advisor and the Access Advisor? For example, which one issues recommendations regarding deploying materialized views?
  • What is the role of Automatic Tuning Optimizer?


1.7. SQL Profiles and SQL Plan Management


  • Which initialization parameters facilitate the discovery, storage, and use of plans by the optimizer?
  • What is the role of Automatic Tuning Optimizer in an ecosystem which also includes SQL Tuning Advisor, SQL Profiles, and SPM mechanism? How are the dots connected to each other? You need to know the data flow between these elements.
  • What are the different steps in the life cycle of a plan? How does a "Fixed" plan differ from an "Accepted" one?
  • What happens when a plan changes? What does "Evolution" entail?
  • How do you manage different plans for different purposes? e.g. batch processing vs. other times


2. References and Resources


I suggest the following resources as adequate for passing the certification exam:

  • Database SQL Tuning Guide in here
  • Database VLDB and Partitioning Guide in here
  • Troubleshooting Oracle Performance, 2nd Ed (2014), by Christian Antognini via Apress

 3. A Demand for Keeping Things In Perspective


Although Parallel Processing appears in less than 20% of the subjects outlined in the official exam topics, it was actually referenced in about half of the 75 questions in one way or another! So if you're not proficient in parallel processing concepts, there's a big chance you will not pass this exam.

Implications of such huge emphasis on the subject are somewhat confusing; especially considering the fact that "throwing resources at a problem" is not exactly an example of "Tuning."

Parallel Processing is as much of a "SQL Tuning" technique as Damien Hirst is a representative of Art, compared to artists such as Picasso, Dali, or even Antony Gormley.

If -like me- you have a background which makes you prone to being open to conspiracy theories, you could even call it "a sign of Oracle's hidden agenda to drive the demand for more beefy ('Engineered') servers!"

But less naive exam takers could simply point to the few references to Resource Management in the exam, and call that "disproportionate" to the number of questions about Parallel Processing.

I believe the subject of Parallel Processing could be addressed in an exam such as Performance Tuning (1Z0-054 or 1Z0-064), AND alongside concepts like instance caging.

At least I tend to think that "SQL Tuning" is more of an art form, and Parallel Processing is the sledgehammer that Rodin never used, and never would have used, to create "The Thinker."

Update: I'd like to thank Horia Berca for guiding me to the slides from Tom Kyte's Optimizer Masterclass. The Optimizer Masterclass is not the preparation/training course for 1Z0-117, but it made me happy to see that the emphasis one of Oracle's gurus puts on the subject of Parallel Processing seems to bolster my position, and what I've expressed in this post.