RMAN: The learning continues

June 30, 2017

Recently, we were asked by a client to become the caretakers of a new consolidation platform where, at any given time, there would be upwards of 15 databases running on an OVM Linux host (disclaimer: while multitenant would definitely be beneficial, it is not in scope).

 

Here's where the interesting bit starts. Historically, the client uses TSM with Data Protection for Oracle for database backups, and their original strategy was to have three scripts per database environment (one for full backups, another for incrementals and the last for archivelogs). With 17 databases as a starting point, that would mean 51 scripts to configure and maintain! One of my mottos, scale, certainly doesn't factor into that equation!

 

Because there is a strict separation of duties (the backup team is responsible for the scheduling and successful completion of backups), configuring the backups through Oracle Enterprise Manager, which was my first option, was off the table. We needed scripts on the server which would be called by the TSM agent on the host. Fortunately, the TSM scheduling mechanism allows parameter values to be passed, which got me working on a plan to simplify things.

 

At the end of the day, I implemented two RMAN features which I hadn't used before.

 

  1. Virtual Catalog

    • With multiple business units planned to use the new environment, separating the RMAN catalogs makes sense in order to ensure that DBAs from the various business units are protected from accidentally viewing/trashing the others' information (see the sketch after this list).

  2. RMAN scripts stored in the catalog

    • This is the really nice part, as the stored scripts can accept parameters. This means ONE script for each backup type, and ONE location to make any changes that may arise.
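
To illustrate the catalog separation, here is a minimal sketch of the grants involved, assuming an 11g-style virtual private catalog; the base catalog owner (rco), the business unit catalog user (vpc_bu1) and the database name (bu1db1) are hypothetical:

    # As the base catalog owner, expose only the business unit's databases:
    $ rman catalog rco@catdb
    GRANT CATALOG FOR DATABASE bu1db1 TO vpc_bu1;
    GRANT REGISTER DATABASE TO vpc_bu1;

    # As the business unit's catalog user, create the virtual catalog:
    $ rman catalog vpc_bu1@catdb
    CREATE VIRTUAL CATALOG;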

 

I wanted to keep the actual RMAN script as simple as possible, and thus decided to set RMAN persistent settings per database. These define the default device type, the specific settings required for TSM, how many channels to allocate, and so on. This only needs to be done once, so whenever a new database is provisioned, these settings are simply applied as part of the deployment.

 

Below is a sample of the persistent settings which have been altered.
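
A representative set of CONFIGURE commands might look like the following; the channel parallelism, retention window and TDPO options file path are assumptions for illustration, not the client's actual values:

    # Run once per database, connected to the target and the catalog.
    CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
    CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 4;
    CONFIGURE CHANNEL DEVICE TYPE SBT_TAPE
      PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;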

 

 

On to the next step: creating the stored script itself! The process is pretty straightforward, and the online documentation covers all the little nuances you need to be cognizant of.

 

From my perspective, the most important consideration was to go with global stored scripts, as the intention was to allow any database registered in the recovery catalog to execute the appropriate RMAN command.

 

Below is the process to actually create the stored RMAN script.
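
As a minimal sketch (the script name, comment and backup commands are illustrative, not the actual script), a global stored script taking the backup level as a substitution variable could be created like this:

    $ rman target / catalog vpc_bu1@catdb
    CREATE GLOBAL SCRIPT backup_db
    COMMENT 'Database backup, incremental level passed as &1'
    {
      BACKUP INCREMENTAL LEVEL &1 DATABASE;
      BACKUP ARCHIVELOG ALL;
    }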

 

 

A word of caution: because the script accepts parameter values, each line needs to be added individually, and RMAN prompts for a dummy value for each substitution variable as it is entered. These values are only used to validate the script at creation time; the real values are supplied when the script is called for an actual backup.
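
At execution time, the real values are passed through the USING clause; continuing the hypothetical script above, a level 1 incremental would be run as:

    RUN {
      EXECUTE GLOBAL SCRIPT backup_db USING 1;
    }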

 

A little Linux scripting later, and I finally had a solution that would scale. I've included the script below, but as with everything you get off the internet, it is provided as is and should be tested thoroughly in your own environment.
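
The wrapper below is a minimal sketch along the same lines, not the original script: it assumes the TSM scheduler passes the ORACLE_SID and backup level as arguments, that oraenv is available on the host, and that the catalog connect string, script name and log directory match your environment:

    #!/bin/bash
    # backup_db.sh -- called by the TSM scheduler as: backup_db.sh <SID> <LEVEL>
    # (script name, paths and catalog connect string are assumptions)
    ORACLE_SID=$1
    LEVEL=$2

    if [ -z "$ORACLE_SID" ] || [ -z "$LEVEL" ]; then
      echo "Usage: $0 <ORACLE_SID> <LEVEL>" >&2
      exit 1
    fi

    # Set the Oracle environment for the requested database.
    export ORACLE_SID
    export ORAENV_ASK=NO
    . /usr/local/bin/oraenv -s

    LOG=/var/log/rman/${ORACLE_SID}_level${LEVEL}_$(date +%Y%m%d_%H%M%S).log

    # Call the global stored script, passing the backup level as &1.
    echo "RUN { EXECUTE GLOBAL SCRIPT backup_db USING ${LEVEL}; }" | \
      rman target / catalog vpc_bu1@catdb log="$LOG"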

 

 

It's always nice to see the value of things which you may have known about, but never had the opportunity to make use of! I hope that this provides some insight into how to potentially streamline backups of large numbers of databases in your organization. If you have any thoughts or comments, please feel welcome to add them below!

 

 

 
