Data Guard — it’s real it’s Oracle, you know what you’ve got!

“Data Guard — it’s real it’s Oracle, you know what you’ve got!” said Joe Meeks (Director of High Availability Product Management at Oracle) in his closing statement during the Live Webcast presentation titled “Maximize Availability with Oracle Database 11g” held today.

The focus of this presentation was the “Active Data Guard Option” of Oracle 11g. In one sentence — the Active Data Guard Option lets you open your Physical Standby database READ ONLY while redo apply from the primary keeps running. In contrast, in 10g redo apply halted when the physical standby was opened READ ONLY; redo transport still took place, but the redo was not being applied. 11g bridges this gap and does so very efficiently — as Joe Meeks said, “With 11gR2 Active Data Guard – we are confident that latency is very low”.
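For context, here is a minimal sketch of what the standby side of that looks like in 11g — the standard commands, not something quoted from the webcast:

-- on the 11g physical standby (sketch): stop apply, open read only,
-- then restart real-time apply so queries see near-current data
alter database recover managed standby database cancel;
alter database open read only;
alter database recover managed standby database using current logfile disconnect from session;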

Data Guard != ZERO downtime, because a rolling upgrade still requires some downtime. For example, according to Joe’s recollection of a major upgrade at UPS, the downtime going from 10g to 11g was ~4 minutes. To get ZERO downtime you must design your application around Oracle Streams.

In contrast to Data Guard, which uses media recovery (block by block), Oracle Streams builds logical change records using SQL. Streams puts full-featured replication at your disposal; for example, you can:

  • replicate a subset of a database
  • perform transformations
  • replicate across platforms
  • set up multimaster replication
  • replicate to a database and have it replicate different subsets of data out

Data Guard is simpler — one-way replication focused on DR using the media recovery process (a time-tested process). A Physical Standby is very horizontal: transparent to the storage and to the application.

PS: Active Data Guard is only available for Physical Standby (block by block changes) and it requires an additional license.

August 19, 2009

Posted In: Data Guard


Making sense of ASH TIME_WAITED Units

It’s widely believed that ASH / v$active_session_history.TIME_WAITED is presented in microseconds. It sure seems like it, but something is amiss, as my small test shows. Using two sessions, I created a test table in one, inserted a row, updated it without committing, and then from another session tried to select the same row FOR UPDATE:

| session A > SQL> create table ash_time_waited_test (t number);
| session A > 
| session A > Table created.
| session A > 
| session A > SQL> insert into ash_time_waited_test values(1);
| session A > 
| session A > 1 row created.
| session A > 
| session A > SQL> commit;
| session A > 
| session A > Commit complete.
| session A > 
| session A > SQL> update ash_time_waited_test set t=2 where t=1;
| session A > 
| session A > 1 row updated.

| session B > SQL> !date
| session B > Thu Jul 30 18:20:15 PDT 2009
| session B > 
| session B > SQL> select * from ash_time_waited_test for update;

| session A > 
| session A > SQL> commit;
| session A > 
| session A > Commit complete.
| session A > 
| session A > SQL>
| session A > 

| session B > 
| session B >          T
| session B > ----------
| session B >          2
| session B > 
| session B > SQL> !date
| session B > Thu Jul 30 18:20:38 PDT 2009
| session B > 

What I found next was rather strange — the sum(TIME_WAITED) from ASH (v$active_session_history) did not add up to the total time session A kept the lock on the test table:

First we get the SQL_ID for session B (the waiter) so we can tie into ASH.SQL_ID later:

SQL> select sql_id from v$sql where sql_text = 'select * from ash_time_waited_test for update';

SQL_ID
-------------
fpm89fyrsf6gv

Now having the SQL_ID in hand we can dig into ASH to see what waits our session experienced:

SQL> col SAMPLE_TIME format a25
SQL> col event format a30
SQL> select sample_time,EVENT,WAIT_TIME,TIME_WAITED from v$active_session_history
  2  where sql_id = 'fpm89fyrsf6gv'
  3  order by sample_time;

SAMPLE_TIME               EVENT                           WAIT_TIME TIME_WAITED
------------------------- ------------------------------ ---------- -----------
30-JUL-09 06.20.25.751 PM enq: TX - row lock contention           0      489233
30-JUL-09 06.20.26.761 PM enq: TX - row lock contention           0      489244
30-JUL-09 06.20.27.761 PM enq: TX - row lock contention           0      489242
30-JUL-09 06.20.28.761 PM enq: TX - row lock contention           0      489238
30-JUL-09 06.20.29.761 PM enq: TX - row lock contention           0      489244
30-JUL-09 06.20.30.761 PM enq: TX - row lock contention           0      489242
30-JUL-09 06.20.31.761 PM enq: TX - row lock contention           0      489244
30-JUL-09 06.20.32.761 PM enq: TX - row lock contention           0      489253
30-JUL-09 06.20.33.761 PM enq: TX - row lock contention           0      489233
30-JUL-09 06.20.34.761 PM enq: TX - row lock contention           0      489245
30-JUL-09 06.20.35.761 PM enq: TX - row lock contention           0      489245

11 rows selected.

SQL>

Based on the output above we know that the first sample that trapped this wait occurred at “06.20.25.751 PM” and the last one at “06.20.35.761 PM”. The difference (delta) between these two timestamps is just over 10 seconds:

SQL> select e.SAMPLE_TIME - s.SAMPLE_TIME
  2  from v$active_session_history s,
  3       v$active_session_history e
  4  where s.sample_id = (select min(sample_id) from v$active_session_history where sql_id = 'fpm89fyrsf6gv')
  5    and e.sample_id = (select max(sample_id) from v$active_session_history where sql_id = 'fpm89fyrsf6gv')
  6  ;

E.SAMPLE_TIME-S.SAMPLE_TIME
---------------------------------------------------------------------------
+000000000 00:00:10.010

SQL>

So far so good. Now let’s see what we get by summing up the TIME_WAITED:

SQL> select sum(TIME_WAITED) from v$active_session_history
  2  where sql_id = 'fpm89fyrsf6gv'
  3  ;

SUM(TIME_WAITED)
----------------
         5381663

Interesting, isn’t it? If ASH.TIME_WAITED were measured in microseconds then the total wait time = 5381663/1000000 = 5.381663 seconds, and as we know that’s not correct. In fact, we can prove this using another metric, the ELAPSED_TIME_* from DBA_HIST_SQLSTAT:

SQL> select ELAPSED_TIME_TOTAL, ELAPSED_TIME_DELTA,
  2  APWAIT_TOTAL, APWAIT_DELTA,
  3  CCWAIT_TOTAL, CCWAIT_DELTA
  4  from DBA_HIST_SQLSTAT
  5  where sql_id = 'fpm89fyrsf6gv'
  6  ;

ELAPSED_TIME_TOTAL ELAPSED_TIME_DELTA APWAIT_TOTAL APWAIT_DELTA CCWAIT_TOTAL CCWAIT_DELTA
------------------ ------------------ ------------ ------------ ------------ ------------
          11110172           11110172     11106973     11106973          254          254

SQL>

DBA_HIST_SQLSTAT is dead on — with Elapsed Time Total = 11110172/1000000 = 11.110172 seconds … So what’s happening with ASH then? Why such a large (more than 2x) discrepancy?

I have my theory — I think that because ASH samples are taken every second, half of each second is actually spent processing the data, and for this reason TIME_WAITED gets truncated. Whatever the reason is, one thing is for sure — ASH.TIME_WAITED, at least in this particular case, cannot be trusted.

The final question … how do you then use ASH? How do you arrive at more or less accurate metrics, specifically TIME_WAITED? One way is this — we know that ASH sampling takes place every second, so instead of relying on the buggy TIME_WAITED values we can rely on the actual number of times that a SQL_ID appeared in ASH … If a SQL_ID was trapped by ASH 10 consecutive times with WAIT_TIME=0 (waiting), we know for sure that it waited for at least 10 seconds …
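Here is a minimal sketch of that counting approach, reusing the SQL_ID from the test above — the sample count is treated as approximate seconds waited:

-- approximate seconds waited per event, assuming ~1 ASH sample per second
select event, count(*) approx_seconds_waited
from v$active_session_history
where sql_id = 'fpm89fyrsf6gv'
and wait_time = 0        -- the session was waiting when sampled
group by event
order by approx_seconds_waited desc;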

UPDATE Aug 12, 2009

Graham Wood from Oracle explains the issue with ASH TIME_WAITED in his paper Sifting through the ASHes (see page 35). He also lists some other “ASH data gotchas” on page 27. What really caught my attention, though, was page 36, which explains “ASH vs AWR/Statspack” and specifically says that for “Counts/occurrence data” analysis you should go to AWR, not ASH.

And it really makes sense. After all, ASH is sampled every second, and while that sounds like “often enough”, it really isn’t. All you have to do is run sql_trace on a busy session and you’ll see that there’s a lot going on between the 1-second intervals — all of it missed by ASH. ASH is great at putting a spotlight on the “big issues”, not so great for subtle, sub-second waits which can and do add up to a considerable wait time.

Bottom line for me is that there’s no substitute for the good old sql_trace, but overall, ASH is a huge improvement over what we had (or rather didn’t have) before. It definitely replaces my own monitoring scripts, which I’ve used for many years now, sampling v$session_wait where wait_time=0 every 5 seconds.
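For the record, that home-grown sampler boils down to something like this, run in a loop every few seconds (a rough sketch, not my exact script):

-- rough sketch of a v$session_wait poller (run every N seconds from a shell loop)
select sid, event, seconds_in_wait
from v$session_wait
where wait_time = 0                                   -- currently waiting
and event not like 'SQL*Net message from client%'     -- crude idle-wait filter
order by seconds_in_wait desc;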

Sat: January 2 2016: Update!

I am building a new monitoring framework that:

  1. Samples Oracle ASH every N minutes and captures the live heartbeat of the database.
  2. Allows you to define thresholds on top of this sampling and notifies you if something is amiss.
  3. Saves all this data for historical purposes so that when AWR is wiped out you have solid performance metrics years later.
  4. Monitors Oracle LOCKS.
  5. Monitors Oracle SPACE.
  6. Delivers this functionality in an easy to deploy (single binary) distribution that only takes a minute to install on a new host.

Does this sound like something you’d like in your shop? If yes – then I’d like to hear from you – sign up for the mailing list below to start the discussion and get the private beta as soon as it’s ready (no SPAM here I guarantee it!):

EvenTorEx Mailing List

July 31, 2009

Posted In: Tuning


Notes from the 11g Upgrade & Best Practices hosted by Tom Kyte

I finally got out of the dungeon and made it to an Oracle hosted event titled “Oracle Technology Day: 11g Upgrade & Best Practices”, which was hosted by Tom Kyte and the Oracle Center of Excellence (COE) folks in Los Angeles on June 16 ’09. And it was definitely worth it. I got a chance to finally meet Tom Kyte face to face and I even had my original Wrox “Expert One-on-One Oracle” signed by him. The place was packed and I even heard that the event organizers had to turn some people away because there was no room left.

The presentation was focused on the 11g upgrade — I guess Oracle has had it with us “conservative” DBAs that stick to our guns and don’t upgrade until R2 is out. Well, they sure got my attention with the “Revolutionary New Capabilities brought to you by the upcoming 11g R2” … But before I get to that, I figured I’d run through the notes on the stuff that was most interesting to me as a DBA. So instead of just stuffing the notepad someplace where I will never find it, I figured it would be beneficial to both you and me if I wrote it up here:

Interesting stuff to research:
Quiesce
Incident Packaging
Total Recall
Data Masking

Capture & Replay Workload AKA Real Application Testing:
You can now achieve realistic testing with Database Capture and Replay. DB Capture is capable of recording bind variables, SCNs and the order of operations on your production instance, and then using DB Replay you can reproduce the exact same load on a test system, which allows you to compare metrics (v$..), test upgrades and even test capacity plans.

Database Capture is available as of 9.2.0.8 and up, and it can be used across platforms, so you can easily take a captured workload from 9.2.0.8 and replay it on an 11g test instance. Replay only works on 11g (11.1.0.7, see Note:738538.1).

The core of Real Application Testing is SPA (SQL Performance Analyzer) [pronounced like “health SPA”] and the SQL Tuning Advisor. SPA can be used in conjunction with Replay to look for regressed SQL, allowing you to build acceptable optimizer plans for the new release of the database before you upgrade. Replay commits changes — use Flashback DB to revert back and replay again. 10.2.0.5 Grid Control can automate the Replay and the restore of the database using the GUI.

11g R2 will be able to scale up the workload so it can be effectively used for future capacity planning.

Advice: when making comparisons using these tools, rely on CPU TIME and BUFFER GETS rather than ELAPSED TIME.

Replay clients are called WRC. You can start multiple WRC processes to simulate concurrency. Filters can be used during Capture to limit what is being recorded. 11g R2 will also have selective replay capability.
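A hedged sketch of how the pieces fit together (the capture name, replay name and directory object below are made up; see the DBMS_WORKLOAD_CAPTURE / DBMS_WORKLOAD_REPLAY documentation for the full option list):

-- on production: record the workload into a directory object
exec dbms_workload_capture.start_capture(name => 'upg_capture', dir => 'CAPTURE_DIR');
-- ... let the real workload run, then:
exec dbms_workload_capture.finish_capture();

-- on the 11g test system: process and replay the capture
exec dbms_workload_replay.process_capture(capture_dir => 'CAPTURE_DIR');
exec dbms_workload_replay.initialize_replay(replay_name => 'upg_replay', replay_dir => 'CAPTURE_DIR');
exec dbms_workload_replay.prepare_replay();
-- start one or more WRC clients from the shell, e.g. wrc system/*** replaydir=/path/to/capture
exec dbms_workload_replay.start_replay();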

Data Recovery Advisor:
If you want a second pair of “eyes” or have no idea what to do when your database requires recovery — Data Recovery Advisor might be able to help. Of course this requires you to use RMAN to back up your database in the first place, but you should already be doing that anyway, right? It also requires an RMAN Catalog …

Flashback Data Archive:
Flashback is designed to “cure” the “right command / wrong database” syndrome, where you urgently want to UNDO a mistake.

9i – brought to us Flashback Query
10g – Flashback Tables and DB
11g – brings Flashback Data Archive & Transactions

Flashback Data Archive is enabled on a table-by-table basis: once a table is placed in the Data Archive, a shadow table is created where all changes are recorded and kept. It works through a new background process and has very little (if any) overhead because it mines the UNDO. This is the best option for auditing and should replace all of your DIY audit triggers. While audit triggers can be disabled or dropped, once a table is placed in the flashback data archive it cannot be dropped even by the SYS DBA – a perfect AUDIT.
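A minimal sketch of what enabling it looks like (the tablespace, archive and table names are made up for illustration):

-- create an archive and attach a table to it (hypothetical names)
create flashback archive audit_fda tablespace fda_ts retention 7 year;
alter table app.orders flashback archive audit_fda;

-- historical queries then simply use AS OF:
select * from app.orders as of timestamp (systimestamp - interval '30' day);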

Revolutionary New Capabilities brought to you by the upcoming 11g R2:
Expect to hear a lot of buzz about 11g R2 by October ’09. 80% of the Open World “talk” will be about 11g R2. What can we expect? How about “Upgrade Database Application” online using a new, “revolutionary” feature called “EDITION”?

create edition v2;
alter session set edition = v2;
— from now on everything takes place in this new virtual schema called v2 —
[compile new PKGs]
[add indexes and tables ???] Tom mentioned later that EDITIONs only work for things that “live” in SYSTEM, i.e. compiled and virtual stuff, so this might not work for tables and indexes …
[v1/original is still fully operational and running]
[test v2]
[pause DB]
alter database default edition = v2; [approximate syntax for making v2 the default]
DONE.

Obviously, this will only work if you use PL/SQL. V2 becomes the current EDITION once it’s enabled. V1 can be kept around if you desire so. This is NOT source code control — it’s only designed to easily deploy database applications without disrupting your online users.

11g Upgrade Best Practices:
Upgrading to 10g? Review Note:466181.1
Upgrading to 11g? Review Note:601807.1

Oracle support really, really wants you to use OCM (Oracle Configuration Manager). They also want you to use the new Metalink (I don’t use it just in case you wanted to know …).

USE DBUA (I agree). DBUA == easy. Manual Upgrade == hard.

Backup your optimizer stats — see Note:465787.1

Don’t set any optimizer_* or db_multiblock_* params that you found on Google. This is BAD practice. Most of the default values are good for 99% of installations — don’t touch them.

Gather dictionary statistics before you upgrade, it can dramatically improve the time it takes to upgrade your database.

Post upgrade run:
execute DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- run a representative workload for a while, then:
execute DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

On 11g use AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT — this is 5-6 times faster and close to 100% accurate.

Export AWR repository BEFORE the upgrade.

Plan Stability:
On 11g use the Plan Stability feature — see DBMS_SPM. SPM allows you to create a SQL Baseline where only known and accepted plans are used for a specific SQL. This can be either automated or set up manually by the DBA. SPM works in three phases:

  1. Capture (stored in SYSAUX)
  2. Selection (DBA marks what’s accepted)
  3. Evolution (EVAL all unverified plans to see if they are worth considering)

Capture: OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = true
Selection: OPTIMIZER_USE_SQL_PLAN_BASELINES = true + The Plan must be ENABLED=YES AND ACCEPTED=YES to be used by SPM.
Evolution: Scheduled by DBA via EVAL procedure.

See Note:456518.1.

You can also pack the stored SQL Plan Baselines on DEV and then unpack them into PROD. A great thing to ask of your vendors — have them provide SQL Plan Baselines with all their code.

If you are going to 11g from 10g you can utilize an STS — create an STS (SQL Tuning Set) on 10g, unload it to a staging table and load it into the 11g plan baseline.
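Both paths go through DBMS_SPM — a hedged sketch (the STS name and staging table name are made up):

-- load plans captured in a 10g SQL Tuning Set into the 11g SQL plan baseline
variable cnt number
exec :cnt := dbms_spm.load_plans_from_sqlset(sqlset_name => 'pre_upgrade_sts');

-- or move baselines from DEV to PROD via a staging table
exec dbms_spm.create_stgtab_baseline(table_name => 'SPM_STAGE');
exec :cnt := dbms_spm.pack_stgtab_baseline(table_name => 'SPM_STAGE');
-- export/import the SPM_STAGE table to PROD, then:
exec :cnt := dbms_spm.unpack_stgtab_baseline(table_name => 'SPM_STAGE');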

Some Random Bits and funny Pieces:
Evolution of APEX:
Browse (good) -> WebDB (good) -> Portal (messy) -> dead end
few years later …
HTML DB (good) -> APEX (better)

The guy who wrote APEX (Mike Hichwa) is also largely responsible for the existence of SQL*Plus and SQL Developer.

Bryn Llewellyn owns PLSQL.

DBMS_OUTPUT.put_line (danger — can resolve to a local object or synonym)
sys.DBMS_OUTPUT.put_line (safe — fully qualified)

Tom Kyte:

This is a really big table. Really? How big? It’s really big — imagine the largest table and insert one more record into it — that’s how big.

Rexx is my favorite language …

:)

June 17, 2009

Posted In: Events


Oracle RAC’s share everything vs share nothing …

Google’s share-nothing approach to application development has led to the #1 search engine solution, both in performance and functionality. Notice that I said “application development”, because for the share-nothing approach to work it needs to be built into the application from day one, not as an afterthought.

On the other end of the spectrum we have ERP APPS, where designs with thousands of tables per module are the norm and UNION ALL joins span a multi-page printout. In these types of applications Oracle RAC’s “share everything” approach is clearly superior — scratch that, it’s the only solution, period.

For an interesting read on this specific issue take a look at Kevin Closson’s post titled “Nearly Free or Not, GridSQL for EnterpriseDB is Simply Better Than Real Application Clusters. It is Shared-Nothing Architecture After All!” and a sort of reply to it by a blogger called “bonglonglong” titled “All in the assumptions“.

June 15, 2009

Posted In: RAC


What the heck is “snapshot standby”?

I was just reviewing Note:565535.1 Flashback Database Best Practices & Performance and came across the following paragraph under the title of “Measuring Impact of turning Flashback Database ON / Analyzing the Performance Impact“:

If you’re on RDBMS release 11g and have a physical standby, then you can enable flashback database on your physical standby and then convert it to a snapshot standby (this can also be done in 10.2 with a manual snapshot standby, but no redo is received and applied as in the 11g Snapshot Standby. See Dell 10g Snapshot Standby). Then when you run a workload against the snapshot standby it will be with flashback database logging enabled.

So what the heck is “snapshot standby”?

Essentially it allows you to convert your 11g physical standby to a “Snapshot standby” so that you can “Make changes to the Primary and the Snapshot standby in some common tables” and then “Convert the Snapshot standby back to a Physical standby”. Makes sense? NO? OK, I know, how about this:

  • A snapshot standby database is a fully updateable standby database created by converting a physical standby database into a snapshot standby database.
  • A snapshot standby database receives and archives, but does not apply, redo data from a primary database. The redo data received from the primary database is applied once a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.
  • A snapshot standby can be created from Enterprise Manager, the Data Guard Broker command line interface (DGMGRL) or from SQL*Plus.

Better? YES. This is probably THE reason to go 11g if you haven’t already. Basically you get an instant REPORTING/TEST server without losing your Physical Standby database. See: Using Snapshot Standby for an example.
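The conversion itself is a one-liner, either from SQL*Plus on the standby or from the Data Guard Broker — a sketch (the Broker database name is hypothetical; flashback logging and a flash recovery area are assumed):

-- from SQL*Plus on the mounted physical standby:
alter database convert to snapshot standby;
-- ... run your reports/tests, then throw the local changes away:
alter database convert to physical standby;

-- or from DGMGRL (the Broker), assuming a standby named 'boston':
-- CONVERT DATABASE 'boston' TO SNAPSHOT STANDBY;
-- CONVERT DATABASE 'boston' TO PHYSICAL STANDBY;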

June 15, 2009

Posted In: Data Guard


’09 IOUG Collaborate – notes for Oracle DBAs

I just reviewed Tom Kyte’s presentation from the ’09 IOUG Collaborate event and made a list of things that are specifically interesting to an Oracle DBA, with links for further research into each feature.

I. Those managing PHP/Oracle shops will be happy to learn that there’s a new Database Resident Connection Pool [DRCP] that performs as well as a dedicated server (it really does) with the benefits of the shared server model (see the sketch after this list).

II. Are you still using TOAD? Perhaps it’s time to give Oracle SQL Developer a shot, because it offers a host of SQL/PLSQL tuning features built right in.
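As for the DRCP item above, turning it on amounts to starting the pool on the database side and connecting with a pooled connect string from the client — a sketch (the host/service names are placeholders):

-- start the default pool (SYS_DEFAULT_CONNECTION_POOL) as SYSDBA
exec dbms_connection_pool.start_pool();
-- PHP/OCI8 then connects with a pooled connect string, e.g.
--   oci_pconnect('scott', 'tiger', '//dbhost/orcl:POOLED');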

June 10, 2009

Posted In: Tuning


Formatting Corrupted Block Not Part of Any Segment

It’s been a while since my last post … but this issue I ran into last night is worth mentioning. Here’s what happened — one of my RMAN backups failed with:

   RMAN-00571: ===========================================================
   RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
   RMAN-00571: ===========================================================
   RMAN-03002: failure of backup plus archivelog command at 05/05/2009 23:16:15
   ORA-19566: exceeded limit of 0 corrupt blocks for file /u01/app/czgddata/a_txn_data02.dbf
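The datafile and block numbers used in the query below came from the corruption report; one way to list them after a failed backup (a hedged sketch) is:

   -- blocks RMAN flagged as corrupt during the backup
   select file#, block#, blocks, corruption_type
   from v$database_block_corruption;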

The corrupt block was being reported as part of a segment — a table:

   SQL> connect / as sysdba
   Connected.
   SQL>
   SQL>
   SQL> select owner, segment_name, segment_type
   from   dba_extents
   where  file_id = 401
   and    257532 between block_id and block_id + blocks - 1;
   
     2    3    4
   
   OWNER
   ------------------------------
   SEGMENT_NAME
   --------------------------------------------------------------------------------
   SEGMENT_TYPE
   ------------------
   CZ
   CZ_DEVL_PROJECTS
   TABLE

But the data was still readable and I was able to move the table to “map the corrupt block” outside of the segment boundaries:

   SQL> alter table CZ.CZ_DEVL_PROJECTS move;
   
   Table altered.
   
   SQL> select count(*) from CZ.CZ_DEVL_PROJECTS;
   
     COUNT(*)
   ----------
          312

Even after the move, the block was still being reported as corrupt by dbv, and I had to deploy the procedure from ML-Note: 336133.1, which I fully documented here –> How to Format Corrupted Block Not Part of Any Segment.

May 6, 2009

Posted In: Operations


What I Need From Support

You’ve been there haven’t you? You buy a product, use it, have an issue, cringe and call support. What comes next either breaks or makes your future relations with the company and the product they sold you. It’s at this point that you find out whether they are your partner or you are on your own. With consumer products my expectation for a good one is to never break — I don’t ever want to have to call their support. But when it comes to the toys we use in data centers, I want to hit all of the big issues during QA, and I want it to break down so badly that it stumps the technical support team, so we get a chance to test their infrastructure and protocols.

That’s exactly what happened to one of the key products we selected for our latest large scale implementation. It’s irrelevant to this article what that product is; what’s relevant is that the support organization experience was disappointing. I am not going to focus on what went wrong — it really isn’t that important. What’s important is that we got the message across, and it appears we’ll have a meeting with the people that might be able to make a difference. And I really hope that they do — I believe this company has potential.

I was thinking about the upcoming meeting and what to address and here’s what I realized. It’s actually very simple — here’s my list of things I Need From a Support Organization of a technology company:

  1. Community
  2. Communication
  3. Response

Community

Community is where you go first if you have a non-urgent question. Community needs to consist of three components:

  1. Knowledge Base
  2. Bug Database
  3. Forums

The Knowledge Base needs to offer an up to date list of known issues and white papers that bridge the gap between the official documentation and what customers are actually going through during hands-on implementations. The Bug Database is where you get a chance to feel the pulse of the company — it’s an integral part of the community, and it doesn’t have to be the open bug database we see in open source communities; read-only access where you get to see what kinds of issues development is working on these days is enough. Through the Forums you share experiences with other customers and learn best practices by picking the brains of the community gurus. The Forums are what connect you with the rest of the customer base and give you a chance to see the bigger picture; they also show that the company is open about its business and is not afraid of public scrutiny.

Communication

Communication is one of the most critical aspects of a support organization. It needs to flow both ways — customer to support and support to customer. Support needs to drive Communication: they need to deliver instant updates on your issue, and they need to squeeze the last drop of information from you, because even the smallest piece of information can play a huge role in how effective a solution will be, or how it will help other customers that might be facing similar symptoms but have not yet identified the cause of the problem. Communication is the only thing that allows a good support organization to create a strong Knowledge Base, because it’s the only way to gauge what you are really experiencing in the field.

For example, a failure of their product might have an adverse effect on other products within your technology stack, so it’s imperative for support to ask you for the error messages that you might have seen from the rest of the components, so that these symptoms can be properly documented and published in the knowledge base as a white paper or an alert.

Response

Response is the most critical aspect of the support organization. The worst thing you as a customer can experience is when your requests are ignored or lost in the queue. You need to know as soon as possible that someone is looking into the issue, and it had better NOT be a boilerplate response from an automated system. A Response needs to include an action plan — it’s not enough to simply say “We are working on it”; it should provide an action plan even if it involves you, the customer. Response goes hand in hand with Communication, and it needs to be recorded into a system that you can log in to and view the history of events. Just like Communication, Response goes both ways and support needs to drive it — if a customer is delaying a response to a request for information, support needs to follow up immediately.

And there you have it — Community, Communication, Response = effective support organization.

August 14, 2008

Posted In: Operations


No more SUN

After running ORACLE on SUN for over 7 years and being a big proponent of it, I have finally said enough is enough. Starting this summer our shop made the switch to Linux (HP) and so far I could not be happier. I just don’t get it with SUN anymore; they keep digging a deeper hole by separating themselves further and further from ORACLE. And with ORACLE now offering Linux support, I just don’t see why someone not yet invested in SUN hardware would want to choose them over Linux.

Overpriced SUN Hardware is replaced by powerful and reasonably priced HP Boxes (Front)
Overpriced SUN Hardware is replaced by powerful and reasonably priced HP Boxes (Back)

August 7, 2008

Posted In: Linux
