Tuesday, December 27, 2016

Managing SecureFiles-Next Generation LOBs
-------------------------------------------------
Be aware that in Oracle Database 11g, the default type of LOB is still BasicFiles. However, in Oracle Database 12c, the default type of LOB is now SecureFiles.
 
Prerequisites for SecureFiles:
-    A SecureFile LOB must be stored in a tablespace using automatic segment space management (ASSM; that is, SEGMENT SPACE MANAGEMENT AUTO).
-    The DB_SECUREFILE initialization parameter must be set to either PERMITTED (the default) or ALWAYS.
NEVER: The LOB is created as a BasicFile regardless of whether the SECUREFILE option is specified.
PERMITTED: SecureFile LOBs can be created.
ALWAYS: The LOB is created as a SecureFile type unless the underlying tablespace isn’t using ASSM.
IGNORE: The SECUREFILE option is ignored, along with any SecureFile settings.
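You can check the current setting and change it for the instance; a minimal sketch, assuming you have the ALTER SYSTEM privilege:

```sql
-- Check the current DB_SECUREFILE setting
SHOW PARAMETER db_securefile

-- Require SecureFiles wherever the underlying tablespace allows it
ALTER SYSTEM SET db_securefile = 'ALWAYS' SCOPE=BOTH;
```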

If you use LOBs in Oracle Database 11g or higher, use the new SecureFiles architecture.
SecureFiles is the LOB architecture going forward; it provides features such as compression, encryption, and deduplication.

create table patchmain(
patch_id number
,patch_desc clob)
lob(patch_desc) store as securefile (tablespace lob_data);

If you don’t specify the SECUREFILE clause when creating a table with a LOB column, then by default the LOB is created as a BasicFile LOB.

SQL> insert into patchmain values(1,'clob text');
select
segment_name
,segment_type
,segment_subtype
from user_segments;

select
table_name
,segment_name
,index_name
,securefile
,in_row
from user_lobs;


Encrypting LOBs:
You can transparently encrypt a SecureFile LOB column (just like any other column). Before you use
encryption features, you must set up an encryption wallet.
The ENCRYPT clause enables SecureFile encryption using Oracle Transparent Data Encryption (TDE).
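Setting up the wallet is a one-time task; here is a minimal 11g-style sketch (the wallet directory and password are placeholders for illustration):

```sql
-- In sqlnet.ora (hypothetical path):
-- ENCRYPTION_WALLET_LOCATION =
--   (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))

-- Create the wallet and set the TDE master key (11g syntax; password is a placeholder)
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "WalletPwd1";

-- Open the wallet after subsequent instance restarts
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "WalletPwd1";
```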

CREATE TABLE patchmain(
patch_id number
,patch_desc clob)
LOB(patch_desc) STORE AS SECUREFILE (encrypt)
tablespace inv_clob;

CREATE TABLE patchmain(
patch_id number
,patch_desc clob encrypt)
LOB (patch_desc) STORE AS SECUREFILE;

You can verify the encryption details by querying the DBA_ENCRYPTED_COLUMNS view:
select
table_name
,column_name
,encryption_alg
from dba_encrypted_columns;

If you’ve already created the table, you can alter a column to enable encryption:
alter table patchmain modify
(patch_desc clob encrypt);

You can also specify an encryption algorithm. For example:
alter table patchmain modify
(patch_desc clob encrypt using '3DES168');

You can disable encryption for a SecureFile LOB column via the DECRYPT clause:
alter table patchmain modify
(patch_desc clob decrypt);


You can also modify an existing table so its LOB column is encrypted, using the ALTER TABLE ... MODIFY ... ENCRYPT clause. The following example shows how you can enable encryption for a SecureFiles LOB:
SQL> ALTER TABLE SECURE_DOCS MODIFY (BLOB_column ENCRYPT USING 'AES192');

You can query the USER_/DBA_ENCRYPTED_COLUMNS view to confirm that LOB
columns are encrypted.
SQL> SELECT TABLE_NAME, COLUMN_NAME, ENCRYPTION_ALG, SALT
FROM USER_ENCRYPTED_COLUMNS;

Once a SecureFiles LOB column is encrypted, you can disable encryption using the decrypt
keyword. The decrypt option will convert the encrypted column to clear text, as shown here:
SQL> ALTER TABLE SECURE_DOCS MODIFY (BLOB_CONTENT DECRYPT);


Compressing LOBs:
If you’re using SecureFile LOBs, then you can specify a degree of compression. The benefit is that the
LOBs consume much less space in the database. The downside is that reading and writing the LOBs may
take longer.

CREATE TABLE patchmain(
patch_id NUMBER
,patch_desc CLOB)
LOB(patch_desc) STORE AS SECUREFILE
(COMPRESS LOW)
TABLESPACE inv_clob;

HIGH: Highest degree of compression. Incurs higher latency when reading and writing the LOB.
MEDIUM: Medium level of compression. Default value if compression is specified but with no degree.
LOW: Lowest level of compression. Provides the lowest latency when reading and writing the LOB.
If the COMPRESS clause isn’t specified, no compression is used.

ALTER TABLE patchmain
MODIFY LOB(patch_desc)
(COMPRESS HIGH);

You can change the compression level by specifying the COMPRESS HIGH syntax in the ALTER TABLE command, as shown here:
SQL> ALTER TABLE SECURE_DUP_COMPRESSED_DOCS
MODIFY LOB (BLOB_CONTENT) (COMPRESS HIGH);

If you need to uncompress the LOB segment for one reason or another, you can use the nocompress keyword:
SQL> ALTER TABLE SECURE_DOCS_HIGH MODIFY LOB (BLOB_CONTENT) (NOCOMPRESS);

ALTER TABLE patchmain
MODIFY LOB(patch_desc)
(NOCOMPRESS);

Tip: Try to enable compression, deduplication, and encryption through a CREATE TABLE statement. If you use an ALTER TABLE statement, the table is locked while the LOB is modified.


Deduplicating LOBs:
If you have an application where identical LOBs are associated with two or more rows, you should
consider using the SecureFile deduplication feature. When enabled, this instructs Oracle to check when
a new LOB is inserted into a table and see whether that LOB is already stored in another row (for the
same LOB column). If it’s already stored, then Oracle stores a pointer to the existing identical LOB. This
can potentially mean huge space savings for your application.

CREATE TABLE patchmain(
patch_id NUMBER
,patch_desc CLOB)
LOB(patch_desc) STORE AS SECUREFILE
(DEDUPLICATE)
TABLESPACE inv_clob;


alter table patchmain
modify lob(patch_desc) (deduplicate);

Here’s another example that modifies a partitioned LOB to enable deduplication:
alter table patchmain modify partition p2
lob (patch_desc) (deduplicate lob);

If you decide that you don’t want deduplication enabled, use the KEEP_DUPLICATES clause:
alter table patchmain
modify lob(patch_desc) (keep_duplicates);



Creating a Partitioned LOB:
You can partition LOBs by RANGE, LIST, or HASH.
CREATE TABLE patchmain(
patch_id NUMBER
,region VARCHAR2(16)
,patch_desc CLOB)
LOB(patch_desc) STORE AS (TABLESPACE patch1)
PARTITION BY LIST (REGION) (
PARTITION p1 VALUES ('EAST')
LOB(patch_desc) STORE AS SECUREFILE
(TABLESPACE patch1 COMPRESS HIGH)
TABLESPACE inv_data1
,
PARTITION p2 VALUES ('WEST')
LOB(patch_desc) STORE AS SECUREFILE
(TABLESPACE patch2 DEDUPLICATE NOCOMPRESS)
TABLESPACE inv_data2
,
PARTITION p3 VALUES (DEFAULT)
LOB(patch_desc) STORE AS SECUREFILE
(TABLESPACE patch3 COMPRESS LOW)
TABLESPACE inv_data3
);

select
table_name
,column_name
,partition_name
,tablespace_name
,compression
,deduplication
from user_lob_partitions;

LOB compression can be enabled at the partition level:

SQL> create table docs_compressed_mix2
2 (document_id number,
3 blob_content blob,
4 document_category varchar2(55))
5 partition by list (document_category)
6 (
7 partition p_dba1 values ('Data Guard') lob(blob_content)
8 store as securefile(compress high),
9 partition p_dba2 values ('ASM') lob(blob_content)
10 store as securefile(compress medium),
11 partition p_dba3 values ('Java') lob(blob_content)
12 store as securefile(nocompress),
13 partition p_dba4 values (default) lob(blob_content)
14 store as securefile(compress high)
15* )
SQL> /

alter table patchmain modify partition p1
lob (patch_desc) (compress high);

The next example modifies a partitioned LOB so it doesn’t keep duplicate values (via the
DEDUPLICATE clause):
alter table patchmain modify partition p2
lob (patch_desc) (deduplicate lob);



Moving a LOB Column:
You can use the ALTER TABLE...MOVE...STORE AS statement to move a LOB column to a separate tablespace (from the table’s tablespace). Here’s the basic syntax:
alter table <table_name> move lob(<lob_column>) store as (tablespace <tablespace_name>);
The next example moves the LOB column to the INV_CLOB tablespace:
alter table patchmain
move lob(patch_desc)
store as basicfile (tablespace inv_clob);

If the LOB column is populated with large amounts of data, you almost always want to store the LOB
in a tablespace separate from the rest of the table data. In these scenarios, the LOB data has different
growth and storage requirements and is best maintained in its own tablespace.


Adding a LOB Column:
SQL> alter table inv add(inv_image blob);
OR
alter table inv add(inv_image blob)
lob(inv_image) store as securefile(tablespace lob_data);

Removing a LOB Column:
Before you remove a column, consider renaming it so that you can better identify whether any
applications or users are still accessing it:
SQL> alter table patchmain rename column patch_desc to patch_desc_old;
After you determine that nobody is using the column, use the ALTER TABLE...DROP statement to drop it:
SQL> alter table patchmain drop(patch_desc_old);


Caching LOBs:
By default, when reading and writing LOB columns, Oracle doesn’t cache LOBs in memory. You can
change the default behavior by setting the cache-related storage options.

create table patchmain(
patch_id number
,patch_desc clob)
lob(patch_desc) store as (tablespace lob_data cache);

CACHE: Oracle should place LOB data in the buffer cache for faster access.
CACHE READS: Oracle should place LOB data in the buffer cache for reads but not for writes.
NOCACHE: LOB data shouldn’t be placed in the buffer cache. This is the default for both SecureFile and BasicFile LOBs.

You can verify the LOB caching with this query:
SQL> select table_name, column_name, cache from user_lobs;

Storing LOBs In and Out of Line:
By default, up to approximately 4000 characters of a LOB column are stored in line with the table row. If
the LOB is over 4000 characters, then Oracle automatically stores the LOB outside of the row data. The
main advantage of storing a LOB in row is that small LOBs (less than 4000 characters) require less I/O,
because Oracle doesn’t have to search out of row for the LOB data.

However, storing LOB data in row isn’t always desirable. The disadvantage of storing LOBs in row is
that the table row sizes are potentially longer. This can impact the performance of full-table scans, range
scans, and updates to columns other than the LOB column. In these situations, you may want to disable
storage in the row. For example, you explicitly instruct Oracle to store the LOB outside of the row with
the DISABLE STORAGE IN ROW clause:

create table patchmain(
patch_id number
,patch_desc clob
,log_file blob)
lob(patch_desc, log_file)
store as (
tablespace lob_data
disable storage in row);

If you want to store up to 4000 characters of a LOB in the table row, use the ENABLE STORAGE IN ROW clause when creating the table:
create table patchmain(
patch_id number
,patch_desc clob
,log_file blob)
lob(patch_desc, log_file)
store as (
tablespace lob_data
enable storage in row);


You can’t modify the LOB in-row storage setting after the table has been created. The only way to alter
storage in row is to either move the LOB column or drop and re-create the table.
alter table patchmain
move lob(patch_desc)
store as (enable storage in row);



Migrating BasicFiles to SecureFiles
------------------------------------
You can migrate BasicFile LOB data to SecureFiles via one of the following methods:
1. Create a new table, load the data from the old table, and rename the tables.
2. Move the table.
3. Redefine the table online.

1. Creating a New Table:
create table patchmain_new(
patch_id number
,patch_desc clob)
lob(patch_desc) store as securefile (tablespace lob_data);

Next, load the newly created table with data from the old table:
SQL> insert into patchmain_new select * from patchmain;

Now, rename the tables:
SQL> rename patchmain to patchmain_old;
SQL> rename patchmain_new to patchmain;
When using this technique, be sure any grants that were pointing at the old table are reissued for the new table.
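Grants follow a table through a rename, so after the swap you can list the grants that moved with the old table and reissue them against the new one. A sketch, assuming access to DBA_TAB_PRIVS (the DARL schema and APP_USER grantee are illustrative):

```sql
-- Grants that followed the renamed (old) table
select grantee, privilege
from dba_tab_privs
where owner = 'DARL'                 -- placeholder schema
  and table_name = 'PATCHMAIN_OLD';

-- Reissue each one against the new table, for example:
grant select on patchmain to app_user;   -- app_user is a placeholder
```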

2. Moving a Table to a SecureFile Architecture:
alter table patchmain
move lob(patch_desc)
store as securefile (tablespace inv_clob);

You can verify that the column is now a SecureFile via this query:
SQL> select table_name, column_name, securefile from user_lobs;

3. Online Redefinition:
You can also redefine a table while it’s online via the DBMS_REDEFINITION package. Use the following steps to do an online redefinition:
a. Ensure that the table has a primary key. If the table doesn’t have a primary key, then create one:
alter table patchmain
add constraint patchmain_pk
primary key (patch_id);

b. Create a new table that defines the LOB column(s) as SecureFile:
create table patchmain_new(
patch_id number
,patch_desc clob)
lob(patch_desc)
store as securefile (tablespace lob_data);

c. Map the columns of the new table to the original table:
declare
l_col_map varchar2(2000);
begin
l_col_map := 'patch_id patch_id, patch_desc patch_desc';
dbms_redefinition.start_redef_table('DARL','PATCHMAIN','PATCHMAIN_NEW',l_col_map);
end;
/

d. Copy the data (this can take a long time if there are many rows):
set serverout on size 1000000
declare
l_err_cnt integer :=0;
begin
dbms_redefinition.copy_table_dependents(
'DARL','PATCHMAIN','PATCHMAIN_NEW',1,TRUE, TRUE, TRUE, FALSE, l_err_cnt
);
dbms_output.put_line('Num Errors: ' || l_err_cnt);
end;
/

e. Finish the redefinition:
begin
dbms_redefinition.finish_redef_table('DARL','PATCHMAIN','PATCHMAIN_NEW');
end;
/
You can confirm that the table has been redefined via this query:
SQL> select table_name, column_name, securefile from user_lobs;





Loading LOBs
-------------
Loading LOB data isn’t typically the DBA’s job, but you should be familiar with techniques used to
populate LOB columns. Developers may come to you for help with troubleshooting, performance, or space-related issues.

Loading a CLOB:
First, create an Oracle database directory object that points to the operating system directory in which the CLOB file is stored. This directory object is used when loading the CLOB. In this example, the Oracle directory object is named LOAD_LOB and the operating system directory is /home/oracle/scripts:
SQL> create or replace directory load_lob as '/home/oracle/scripts';

For reference, listed next is the DDL used to create the table in which the CLOB file is loaded:
create table patchmain(
patch_id number primary key
,patch_desc clob
,patch_file blob)
lob(patch_desc, patch_file)
store as securefile (compress low) tablespace lob_data;

This example also uses a sequence named PATCH_SEQ. Here’s the sequence-creation script:
SQL> create sequence patch_seq;

The following bit of code uses the DBMS_LOB package to load a text file (named patch.txt) into a CLOB
column. In this example, the table name is PATCHMAIN and the CLOB column is PATCH_DESC:
declare
src_clb bfile; -- point to source CLOB on file system
dst_clb clob; -- destination CLOB in table
src_doc_name varchar2(300) := 'patch.txt';
src_offset integer := 1; -- where to start in the source CLOB
dst_offset integer := 1; -- where to start in the target CLOB
lang_ctx integer := dbms_lob.default_lang_ctx;
warning_msg number; -- returns warning value if bad chars
begin
src_clb := bfilename('LOAD_LOB',src_doc_name); -- assign pointer to file
--
insert into patchmain(patch_id, patch_desc) -- create LOB placeholder
values(patch_seq.nextval, empty_clob())
returning patch_desc into dst_clb;
--
dbms_lob.open(src_clb, dbms_lob.lob_readonly); -- open file
--
-- load the file into the LOB
dbms_lob.loadclobfromfile(
dest_lob => dst_clb,
src_bfile => src_clb,
amount => dbms_lob.lobmaxsize,
dest_offset => dst_offset,
src_offset => src_offset,
bfile_csid => dbms_lob.default_csid,
lang_context => lang_ctx,
warning => warning_msg
);
dbms_lob.close(src_clb); -- close file
--
dbms_output.put_line('Wrote CLOB: ' || src_doc_name);
end;
/

You can place this code in a file and execute it from the SQL command prompt. In this example, the file that contains the code is named clob.sql:
SQL> set serverout on size 1000000
SQL> @clob.sql
Here’s the expected output:
Wrote CLOB: patch.txt
PL/SQL procedure successfully completed.




Loading a BLOB
---------------
Loading a BLOB is similar to loading a CLOB. This example uses the same directory object, table, and sequence from the previous example, which loaded a CLOB. Loading a BLOB is simpler than loading a CLOB because you don’t have to specify character-set information.
This example loads a file named patch.zip into the PATCH_FILE BLOB column:
declare
src_blb bfile; -- point to source BLOB on file system
dst_blb blob; -- destination BLOB in table
src_doc_name varchar2(300) := 'patch.zip';
src_offset integer := 1; -- where to start in the source BLOB
dst_offset integer := 1; -- where to start in the target BLOB
begin
src_blb := bfilename('LOAD_LOB',src_doc_name); -- assign pointer to file
--
insert into patchmain(patch_id, patch_file)
values(patch_seq.nextval, empty_blob())
returning patch_file into dst_blb; -- create LOB placeholder column first
dbms_lob.open(src_blb, dbms_lob.lob_readonly);
--
dbms_lob.loadblobfromfile(
dest_lob => dst_blb,
src_bfile => src_blb,
amount => dbms_lob.lobmaxsize,
dest_offset => dst_offset,
src_offset => src_offset
);
dbms_lob.close(src_blb);
dbms_output.put_line('Wrote BLOB: ' || src_doc_name);
end;
/

You can place this code in a file and run it from the SQL command prompt. In this example, the file that contains the code is named blob.sql:
SQL> set serverout on size 1000000
SQL> @blob.sql

Here’s the expected output:
Wrote BLOB: patch.zip
PL/SQL procedure successfully completed.




References:
http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html
http://dbashahid.blogspot.com/search?updated-max=2012-06-27T00:24:00-07:00&max-results=7&start=33&by-date=false (Insert an Image File into Oracle Database)
http://oracle-base.com/articles/8i/export-blob.php
http://oracle-base.com/articles/8i/export-clob.php
http://oracle-base.com/articles/8i/import-blob.php
http://oracle-base.com/articles/8i/import-clob.php
http://oracle-base.com/articles/8i/intermedia-import-export-of-images.php
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)

Wednesday, December 21, 2016

Managing hints
----------------------------
Metalink Note:221970.1

Common reasons to place hints in SQL include changing the access path, changing the join order or join type for queries that perform joins, influencing DML behavior, and invoking data warehouse–specific operations, to name a few.

Place your hint into the statement using the /*+ hint */ syntax—for example:

Specify the NO_INDEX hint to let the optimizer disallow the use of a certain index:
SQL> select /*+ NO_INDEX(employees emp_emp_id) */ employee_id
from employees
where employee_id > 200;

If you just specify the NO_INDEX hint without listing any indexes, the optimizer will ignore all indexes on the table you specify.
You can also explicitly state the name of the index you wish to bypass:
select /*+ no_index(emp emp_i2) */ empno, ename, deptno
from emp
where deptno = 1;

Be sure to leave a space following the plus sign. The /*+ sequence is exactly three characters long,
with no spaces. When using hints, it is good practice to be as specific as possible when instructing
the optimizer what to do.

Forcing index:
----
SQL> select /*+ index (employees emp_dept_idx) */
employee_id, department_id from employees
where department_id > 50;


You can also specify the INDEX hint without specifying an index, as shown here:
SQL> select /*+ index (employees) */
employee_id, department_id from employees
where department_id > 50;
Since the INDEX hint in this example doesn't specify an index to use, the optimizer has the leeway to
select the best index—the index resulting in the least cost.


If you want to specify more than one index within the INDEX hint, Oracle recommends that you
specify the INDEX_COMBINE hint rather than the INDEX hint. You specify the INDEX_COMBINE hint in the
following way:
SQL> select /*+ index_combine(e emp_manager_ix emp_department_ix) */ *
from employees e
where manager_id = 108
or department_id=110;
In this case, the optimizer will use the combination of the two indexes you specify that has the lowest
cost. If you specify the INDEX_COMBINE hint without providing a list of indexes, the optimizer will use the
best combination of indexes based on its cost estimates.



Use the FULL hint to instruct the optimizer to choose a full table scan instead of an index scan.
SELECT /*+ full(emp) */ * FROM emp;

The foregoing hint instructs the optimizer to bypass the use of any possible indexes on the EMP table,
and simply scan the entire table in order to retrieve the data for the query.

OR

SELECT /*+ FULL(SMALL) */ col1,col2....
   FROM   SMALL
   WHERE  ID = 2;
OR
select /*+ FULL(emp) */ empno, ename, deptno
from emp
where deptno = 1;

In this example, even if there were an index on the DEPTNO column, a full table scan would still be performed.



For example, if we want to get the names of all employees that received a commission, our query would look like this:
SELECT ename, comm FROM emp
WHERE comm > 0;

----------------------------------
| Id | Operation | Name |
----------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| EMP |
----------------------------------

As noted in the explain plan, no index is used. We happen to know there is a composite index on the
SAL and COMM columns of our EMP table. We can add a hint to use this index to gain the benefit of having
an index on the COMM column, even though it is not the leading column of the index:

SELECT /*+ index_ss(emp emp_i3) */ ename, comm FROM emp
WHERE comm > 0;

----------------------------------------------
| Id | Operation | Name |
----------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP |
| 2 | INDEX SKIP SCAN | EMP_I3 |
----------------------------------------------




The FULL hint specifies that we want Oracle to perform a full table scan. A hint can be applied to SELECT, INSERT, UPDATE, and DELETE statements. It is specified by placing the hint immediately after the SELECT, INSERT, UPDATE, or DELETE keyword in a comment. The comment must have a + (plus) sign immediately following the open-comment indicator.

For example....

   SELECT /*+ FULL */ *
   SELECT /*+ ORDERED */ *
   SELECT --+ INDEX( .. )
  
   SELECT /*+ leading(dept, emp) */ ename, deptno
    FROM emp JOIN dept USING(deptno);
   
   
    SELECT /*+ use_nl(emp, dept) */ ename, dname
FROM emp JOIN dept USING (deptno);            --To invoke a nested loops join, use the USE_NL hint

To invoke a hash join, use the USE_HASH hint, and place both tables needing the join within parentheses
inside the USE_HASH hint:
SELECT /*+ use_hash(emp_all, dept) */ ename, dname
FROM emp_all JOIN dept USING (deptno);



Join Methods and Their Hints
----------------------------
Nested loops      USE_NL /NO_USE_NL /USE_NL_WITH_INDEX
Nested loops joins are efficient when processing a small
number of rows. The optimizer chooses a driving table, which
is the “outer” table in the join. For each row in the outer table,
each row in the inner table is searched.

Hash              USE_HASH /NO_USE_HASH
Hash joins are efficient when processing a large number of
rows. Hash joins are used only for equijoins.

Sort merge        USE_MERGE /NO_USE_MERGE
A sort merge join is ideal for pre-sorted rows and full table
scans. The sort merge join is used for non-equality joins. Both
tables are sorted on the join key, and then merged. It outperforms
nested loops joins for large sets of rows.
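A sort merge join can be requested the same way as the USE_NL and USE_HASH examples above; a sketch using the same EMP and DEPT tables:

```sql
SELECT /*+ use_merge(emp, dept) */ ename, dname
FROM emp JOIN dept USING (deptno);
```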



SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      ALL_ROWS

SELECT /*+ first_rows */ *
FROM employees NATURAL JOIN departments;


If we needed the reverse situation, and the database’s default optimizer_mode was set to FIRST_ROWS,
we can supply an ALL_ROWS hint to tell the optimizer to use that mode when determining the execution
plan:

SQL> alter system set optimizer_mode=first_rows scope=both;

SELECT /*+ all_rows */ *
FROM employees NATURAL JOIN departments;





Maximizing Data Loading Speeds
------------------------------
Use a combination of the following two features to maximize the speed of insert statements:
- Set the table’s logging attribute to NOLOGGING; this minimizes the generation of redo for direct path operations (this feature has no effect on regular DML operations). Use NOLOGGING only when Data Guard is not configured.

The NOLOGGING feature never affects redo generation for regular INSERT, UPDATE, MERGE, and DELETE statements.

If you’re using RMAN to back up your database, you can report on unrecoverable datafiles via the REPORT UNRECOVERABLE command.
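For example, at the RMAN prompt while connected to the target database:

```sql
-- RMAN command, not SQL: lists datafiles affected by unrecoverable (NOLOGGING) operations
REPORT UNRECOVERABLE;
```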

SQL>SELECT
TABLE_NAME
,LOGGING
FROM USER_TABLES
WHERE TABLE_NAME = 'F_REGS';

SQL> ALTER TABLE F_REGS NOLOGGING;



You are doing a DML INSERT statement, and it is performing slower than needed. You want to optimize
the INSERT statement to use a direct-path insert technique.

- Use a direct path loading feature, such as the following:
    - INSERT /*+ APPEND */ on queries that use a subquery for determining which records are inserted.
    - INSERT /*+ APPEND_VALUES */ on queries that use a VALUES clause
    - CREATE TABLE…AS SELECT
   
The APPEND hint works only with an INSERT statement that selects from a subquery; it does not work with an
INSERT statement that has a VALUES clause. For that, you need to use the APPEND_VALUES hint.

SQL>INSERT /*+ APPEND */ INTO F_REGS
SELECT * FROM REG_MASTER;

This is appropriate when you need to copy a large volume of rows between tables. By bypassing the
database buffer cache and appending the data directly to the segment above the high-water mark,
a direct-path insert avoids significant overhead. This is a very popular method for inserting rows
into a table quickly.

When you specify one of these hints, Oracle will perform a direct-path insert. In a direct-path insert,
the data is appended at the end of a table, rather than using free space that is found within current
allocated blocks for that table.


INSERT /*+ append_values */ INTO emp_dept
VALUES (15867234,'Smith, JR','Sales',1359,'2010-01-01',200,5,20);

If you want to bypass performing direct-path operations, you can use the NOAPPEND hint.   
   

      
Enabling Query Rewrite:
You have materialized views in your database environment, and you want queries that access the
source tables to be rewritten so they retrieve the results directly from the materialized views.

The REWRITE hint can be used to direct the optimizer to use a materialized view. The materialized view
must have query rewrite enabled, and statistics for the materialized view and the associated objects
should be current to increase the likelihood for a query to be rewritten. See the following example:

SELECT /*+ rewrite(dept_sal_mv) */ department_id,
sum(nvl(salary+(salary*commission_pct),salary)) total_compensation
FROM employees
GROUP BY department_id
having sum(nvl(salary+(salary*commission_pct),salary)) > 10000
ORDER by 2;

We can see here that the optimizer used the materialized view in the execution plan, rather than
processing the entire query and recalculating the summary.


Conversely, you can also use a NOREWRITE hint if, for some reason, you do not want the optimizer to
use the materialized view. One possible reason is that the data in the materialized view is stale compared
to the source table(s), and you want to ensure you are getting the most current data. Here we can see
that the optimizer bypassed the use of the materialized view and resummarized the data directly from
the EMPLOYEES table:

SELECT /*+ norewrite */ department_id,
sum(nvl(salary+(salary*commission_pct),salary)) total_compensation
FROM employees
GROUP BY department_id
having sum(nvl(salary+(salary*commission_pct),salary)) > 10000
ORDER by 2;




Enabling Parallelism for a Specific Query
-----
You have a slow-running query accessing data from a large table. You want to see if you can speed up the
query by instructing Oracle to use multiple processes to retrieve the data.

SELECT /*+ parallel(emp,4) */ empno, ename
FROM emp;

The hint in the preceding query results in four processes dividing the work of reading rows
from the EMP table. Four processes working together will get the job done faster in terms of wall-clock
time than one process doing all the work by itself.


Parallel Hints for Indexes:
Specify the parallel_index hint to control parallel access to indexes. You can generally access an index
in parallel only when the index is a locally partitioned index. In that case, you can apply the
parallel_index hint. Here’s an example:

SELECT /*+ parallel_index(emp, emp_i4 ,4) */ empno, ename
FROM emp
WHERE deptno = 10;



If parallelism isn’t being used, you can insert the parallel hint, and then rerun the explain plan to
verify that the optimizer will use parallelism in the execution plan—for example:
select /*+ parallel(emp,4) */ * from emp;

As of Oracle 11g Release 2, the NOPARALLEL and NOPARALLEL_INDEX hints have been deprecated.
Instead, use NO_PARALLEL and NO_PARALLEL_INDEX.
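For example, to suppress parallelism for a single query against a table that has a parallel degree set:

```sql
select /*+ no_parallel(emp) */ * from emp;
```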



In the following example, we’ve specified a DOP of 4 on the EMP
table:
CREATE TABLE EMP
(
EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES DEPT
)
PARALLEL(DEGREE 4);

By placing a static DOP of 4 on the table, any user accessing the EMP table will get a DOP of 4 for each
query executed.
select * from emp;

CREATE TABLE EMP_COPY
PARALLEL(DEGREE 4)
AS
SELECT * FROM EMP;


The following example alters the default DOP for a table:
ALTER TABLE EMP
PARALLEL(DEGREE 4);

If, after a time, you wish to reset the DOP on your table, you can also do that with the ALTER
statement. See the following two examples on how to reset the DOP for a table:
ALTER TABLE EMP
PARALLEL(DEGREE 1);

OR

ALTER TABLE EMP
NOPARALLEL;


The following example shows how to
change the default DOP for an index:
ALTER INDEX EMP_I1
PARALLEL(DEGREE 4);

As with tables, you can reset the DOP on an index either of the following two ways:
ALTER INDEX EMP_I4
PARALLEL(DEGREE 1);

ALTER INDEX EMP_I4
NOPARALLEL;




Implementing Parallel DML:
You want to induce parallelism when performing DML operations (INSERT, UPDATE, MERGE, DELETE) in
order to speed performance and reduce transaction time.


Parallel DML is disabled by default on a database, and must be
explicitly enabled with the following statement:
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(DEPT,4) */ INTO DEPT
SELECT /*+ PARALLEL(DEPT_COPY,4) */ * FROM DEPT_COPY;

INSERT /*+ PARALLEL(DEPT,4) */ INTO DEPT
SELECT * FROM DEPT_COPY;

In order to take full advantage of parallel capabilities, try to parallelize all portions of a statement. If
you parallelize the INSERT but not the SELECT, the SELECT portion becomes a bottleneck for the INSERT,
and the INSERT performance suffers.


UPDATE /*+ PARALLEL(EMP,4) */ EMP
SET SAL = SAL*1.01
WHERE DEPTNO=10;




ALTER INDEX EMP_COPY_I1
REBUILD
PARALLEL(DEGREE 4);


If you need to rebuild a partition of a large local index, you can also use parallelism to perform this
operation. See the following example:
ALTER INDEX emppart_i1
REBUILD PARTITION emppart2001_p
PARALLEL(DEGREE 4);

Comparing Data in Two Tables
------------------------------------
Toad -> Compare Data -> Data Source (LE) / Data Source (LE2, via DB link) -> All Differences


This can be useful if two similar tables need to be compared across different schemas (or across databases, in which case a local view can be created over a database link), or to compare a table with itself at different points in time, for example, before and after an update through the application to track how the table is affected. The script indicates whether a record is present in one table but not in the other, or whether the same key is present in both tables but the non-key columns have different values.
Example tables and data to test

CREATE TABLE A (
    C1    NUMBER NOT NULL,
    C2    VARCHAR2(2000),
    C3    CHAR(255)
    );
   
CREATE UNIQUE INDEX A_INDEX ON A (C1);

CREATE TABLE B (
    C1    NUMBER NOT NULL,
    C2    VARCHAR2(2000),
    C3    CHAR(255)
    );
   
CREATE UNIQUE INDEX B_INDEX ON B (C1);

INSERT INTO A(C1, C2, C3) VALUES(1, 'A_ROW_001', 'TRUE');
INSERT INTO A(C1, C2, C3) VALUES(2, 'A_ROW_002', 'FALSE');
INSERT INTO A(C1, C2, C3) VALUES(3, 'A_ROW_003', 'TRUE');
INSERT INTO A(C1, C2, C3) VALUES(4, 'A_ROW_004', 'FALSE');

INSERT INTO B(C1, C2, C3) VALUES(1, 'B_ROW_001', 'TRUE');
INSERT INTO B(C1, C2, C3) VALUES(2, 'A_ROW_002', 'FALSE');
INSERT INTO B(C1, C2, C3) VALUES(3, 'B_ROW_003', 'TRUE');
INSERT INTO B(C1, C2, C3) VALUES(4, 'A_ROW_004', 'FALSE');

COMMIT;

(SELECT 'TAB1',C1,C2,C3
FROM SYS.A
MINUS SELECT 'TAB1',C1,C2,C3
FROM SYS.B)
UNION(
SELECT 'TAB2',C1,C2,C3
FROM SYS.B MINUS
SELECT 'TAB2',C1,C2,C3
FROM SYS.A)
ORDER BY C1;




SELECT id,title
FROM (
SELECT id, title FROM t1
UNION ALL
SELECT id,title FROM t2
) tbl
GROUP BY id, title
HAVING count(*) = 1
ORDER BY id;



select
       ADDRESS_TITLE, ADD_LINE1, ADD_LINE2, ADD_LINE3, PHONE_NO
       , MOBILE_NO, FAX_NO, E_MAIL_ADDRESS, CONTACT_PERSON, CITY_ID
       , STATE_ID, COUNTRY_ID, PARTY_CODE, ZONE_ID, DEFAULT_ADD
       , CREATED_BY, CREATION_DATE, LAST_UPDATED_BY, LAST_UPDATE_DATE
from   KMLG.PARTY_ADDRESS_LINES Tbl1
minus
select
       ADDRESS_TITLE, ADD_LINE1, ADD_LINE2, ADD_LINE3, PHONE_NO
       , MOBILE_NO, FAX_NO, E_MAIL_ADDRESS, CONTACT_PERSON, CITY_ID
       , STATE_ID, COUNTRY_ID, PARTY_CODE, ZONE_ID, DEFAULT_ADD
       , CREATED_BY, CREATION_DATE, LAST_UPDATED_BY, LAST_UPDATE_DATE
from   KMLG.PARTY_ADDRESS_LINES@"LINKDB_LE_LE2" Tbl2





Example
--------
--Find the difference in two tables:

select * from kmlg.CITY_MASTER
minus
select * from kmlg.CITY_MASTER@LINKDB_LE_LE2



insert into kmlg.CITY_MASTER@LINKDB_LE_LE2(col1,col2..)
select * from kmlg.CITY_MASTER
minus
select * from kmlg.CITY_MASTER@LINKDB_LE_LE2


--Truncate/delete test table

truncate table kmlg.CITY_MASTER@LINKDB_LE_LE2


--if child records exists then run the following:

--Very very important
--Find all child tables and enable|disable constraint:

SELECT a.owner CHILD_OWNER,
  a.constraint_name CHILD_CONSTRAINT,
  c.COLUMN_NAME CHILD_COLUMN,
  a.table_name CHILD_TABLE,
  a.r_owner PARENT_OWNER,
  a.r_constraint_name PARENT_REF_NAME,
  d.column_name PARENT_COLUMN,
  b.table_name AS PARENT_TABLE,
  a.delete_rule,
  'ALTER TABLE '||a.owner||'.'||a.table_name||' DISABLE CONSTRAINT '||a.constraint_name||';',
  'ALTER TABLE '||a.owner||'.'||a.table_name||' ENABLE CONSTRAINT '||a.constraint_name||';'
FROM dba_constraints a,
  dba_constraints b,
  dba_cons_columns c,
  dba_cons_columns d
WHERE
a.constraint_type='R'
AND a.r_constraint_name=b.constraint_name
AND a.constraint_name  =c.constraint_name
AND b.constraint_name  =d.constraint_name
and a.owner='KMLG'
and b.table_name='CITY_MASTER'
ORDER BY PARENT_TABLE;



--Run the generated DISABLE CONSTRAINT statements, truncate the table, run the insert again, then run the generated ENABLE CONSTRAINT statements.





Managing and Determining the Differences Between Two Tables
------------------------------------------------------------
create table test (
  id number primary key,
  Mname varchar2(50),
  NName varchar2(50)
);

--Suppose this is the table containing new data for Insert or Updates,
--even duplicates that will be discarded
create table test_new (
  id number primary key,
  Mname varchar2(50),
  NName varchar2(50)
);

begin
  insert into test values(1, 'pippo', 'pluto');
  insert into test values(2, 'cacco', 'gnacco');
  insert into test_new values(3, 'pappo', 'babbo');
  insert into test_new values(2, 'caccone', 'gnacco');
  insert into test_new values(1, 'pippo', 'pluto');
end;
/

commit;

--This query returns the rows that need to be inserted or updated;
--rows that are already present and unchanged are discarded
select * from (
  select p.*,
  k.mname old_mname, k.nname old_nname,
  CASE WHEN k.id IS NOT NULL THEN 'U' ELSE 'I' END AS delta_type
  from test_new p
  left outer join test k on
    --join condition based on pk
    k.id = p.id
) where delta_type = 'I'
--non-key columns used to detect changes;
--any unchanged row already present is discarded
or old_nname != nname
or old_mname != mname;
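Once the delta is identified, it can be applied in one statement with MERGE. The following is a sketch using the example tables above; note that this plain form also (harmlessly) re-updates matched rows that are identical in both tables:

```sql
MERGE INTO test k
USING test_new p
ON (k.id = p.id)
WHEN MATCHED THEN
  -- key column id cannot appear in the SET list
  UPDATE SET k.mname = p.mname, k.nname = p.nname
WHEN NOT MATCHED THEN
  INSERT (id, mname, nname) VALUES (p.id, p.mname, p.nname);
```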





Find m to n rows from table
----------------------------
If you want to retrieve a specific range of rows from a table, let's say all records from 35 to 65 from table YOUR_QUERY:

CREATE TABLE YOUR_QUERY (ID NUMBER(3), col1 VARCHAR2(30));

BEGIN
    FOR i IN 1..100 LOOP
      INSERT INTO YOUR_QUERY VALUES (i, 'ROW' || i);
    END LOOP;
    COMMIT;
END;
/

execute the following:

SELECT *
  FROM (SELECT A.*, ROWNUM rnum
          FROM (SELECT * FROM your_query) A
         WHERE ROWNUM <= 65 )
 WHERE rnum >= 35;
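On Oracle 12c and later, the same range can be fetched with the row-limiting clause, which is usually clearer than nesting ROWNUM. Rows 35 to 65 means skipping 34 rows and fetching the next 31:

```sql
SELECT *
FROM your_query
ORDER BY id
OFFSET 34 ROWS FETCH NEXT 31 ROWS ONLY;
```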






Remove bad 'child' records to enable or create a foreign key
-------------------------------------------------------------
If you get an ORA-02298 "parent keys not found" error when you try to enable or create a foreign key from a child table to a parent table, you must identify and remove the bad 'child' records.
The records can be identified and removed using the query:

        DELETE FROM childtablename ct
        WHERE NOT EXISTS (SELECT 'x' FROM parenttablename pt
        WHERE ct.keycolumn1 = pt.keycolumn1 AND ct.keycolumn2 = pt.keycolumn2...);
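Before deleting, it is prudent to preview the orphan rows using the same NOT EXISTS condition. A sketch assuming, for illustration only, a single-column key (childtablename/parenttablename are the placeholder names from above):

```sql
-- Preview the orphan child rows before deleting them.
SELECT ct.*
FROM childtablename ct
WHERE NOT EXISTS (SELECT 1 FROM parenttablename pt
                  WHERE pt.keycolumn1 = ct.keycolumn1);
```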




Finding Child constraints of table
----------------------------------
select 'alter table '||a.owner||'.'||a.table_name||
            ' disable constraint '||a.constraint_name||';'
            from all_constraints a, all_constraints b
            where a.constraint_type = 'R'
            and a.r_constraint_name = b.constraint_name
            and a.r_owner  = b.owner
            and b.table_name = 'NE_TRANSACTIONS';


References:
http://www.dba-oracle.com/t_convert_set_to_join_sql_parameter.htm
Managing and Deleting Duplicate Records from a Table
----------------------------------------------------
Toad -> Schema Browser (select table right click) -> Duplicate Data (Summary-view only) -> Duplicate Data (Editable)

To select all duplicate records from a table and delete them use the queries below:
table_with_duplicates(col1,col2,col3)

To get rid of the dupes, you want to identify the rows that are identical on all of the fields in the table. The query to identify the dupes would be something like:

1.Use subquery to delete duplicate rows

Here we see an example of using an SQL subquery to identify duplicate table rows, manually specifying the join columns:

select * from table1 A
WHERE a.rowid > ANY (
                    SELECT B.rowid
                    FROM table1 B
                    WHERE A.id1= B.id1
                    and A.id2=B.id2
                    );
OR

2. Use self-join to delete duplicate rows
The most effective way to detect duplicate rows is to join the table against itself as shown below.

select
   book_unique_id,
   page_seq_nbr,
   image_key
from
   page_image a
where
   rowid >
     (select min(rowid) from page_image b
      where
         b.key1 = a.key1
      and
         b.key2 = a.key2
      and
         b.key3 = a.key3
      );

Please note that you must specify all of the columns that make the row a duplicate in the SQL where clause. Once you have detected the duplicate rows, you may modify the SQL statement to remove the duplicates as shown below:

delete from
   table_name a
where
   a.rowid >
   any (select b.rowid
   from
      table_name b
   where
      a.col1 = b.col1
   and
      a.col2 = b.col2
   )
;


OR

delete from
   table_name a
where
   a.rowid >
   any (select b.rowid
   from
      table_name b
   where
      (a.col1 = b.col1 or (a.col1 is null and b.col1 is null))
   and
      (a.col2 = b.col2 or (a.col2 is null and b.col2 is null))
   )
;


select empno
from table_with_duplicates
group by empno
having count(*) > 1

OR

select empno,count(*)
from table_with_duplicates
group by empno
having count(*) > 1

OR

SELECT col1,col2,col3,COUNT(*)
FROM table_with_duplicates
GROUP BY col1,col2,col3
HAVING COUNT(*) > 1;



OR
--show all duplicate rows
SELECT col1,col2,col3
FROM table_with_duplicates
WHERE (id) IN
(SELECT id
FROM table_with_duplicates
GROUP BY id
HAVING COUNT(*) > 1)
ORDER BY id



SELECT *
FROM table_with_duplicates A
WHERE EXISTS (
  SELECT 1
  FROM table_with_duplicates
  WHERE COLUMN_NAME = A.COLUMN_NAME
  AND ROWID < A.ROWID
);

OR
--Select the rowid's greater than the minimum rowid for each key column.
select empno
from employee a
where a.rowid >
            ( select min(rowid)
            from employee b
            where a.empno=b.empno
            );

OR

SELECT *
FROM table_with_duplicates a
WHERE ROWID NOT IN
                  (SELECT MAX(ROWID)
                  FROM table_with_duplicates b
                  where a.ID=b.ID
);

OR
SELECT * FROM table_with_duplicates a
WHERE ROWID NOT IN
                  (SELECT MIN(ROWID) FROM table_with_duplicates b
                  GROUP BY col1,col2,col3
);



DELETE FROM table_with_duplicates
WHERE ROWID NOT IN
                  (SELECT MAX(ROWID) FROM table_with_duplicates 
                  GROUP BY col1, col2, col3);

OR
--Remove duplicate line_detail_ids
DELETE from line_details
WHERE rowid NOT IN
                  (SELECT MAX(rowid)
                  FROM line_details
                  GROUP BY line_detail_id);

OR
delete employee a
where a.rowid >
        (select min(rowid)
        from employee b
        where a.empno=b.empno);
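An analytic-function variant of the same idea keeps one row per duplicate group and deletes the rest. A sketch using the same hypothetical columns col1, col2, col3:

```sql
DELETE FROM table_with_duplicates
WHERE rowid IN
      (SELECT rid
       FROM (SELECT rowid AS rid,
                    -- number the rows within each duplicate group
                    ROW_NUMBER() OVER (PARTITION BY col1, col2, col3
                                       ORDER BY rowid) AS rn
             FROM table_with_duplicates)
       -- keep rn = 1, delete every other copy
       WHERE rn > 1);
```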

References:

http://rdbms-insight.com/toolkit/del_dupes.php
https://cooldata.wordpress.com/2014/07/20/eliminate-your-duplicate-data-row-problems-with-simple-sql/
Managing performance using OSWatcher
------------------------------------
Installing OSW is simple and straightforward. Download the tar file from My Oracle Support (formerly Metalink) and untar it at an appropriate location. For example, a dedicated mount point for tools makes management easier.
tar xvf osw.nnn.tar
 
Since this is a shell script, collection can be done either as a foreground process or in the background. To enable data collection in the background, submit the primary script with the "nohup" option.
1. To start the OSW as a foreground process, use startOSW.sh.
2. To stop the OSW foreground process, use stopOSW.sh.
3. To submit the process in the background:
nohup ./startOSW.sh n1 n2 &
where n1 is the snapshot interval in seconds and n2 is the number of hours of archive data to retain.

4. To generate graphs from the data collected, use OSWg:
java -jar oswg.jar -i <complete path of the archive directory>

OSWg requires a windowing (graphical) environment.
Note: In a RAC environment, OSWatcher needs to be installed and run on all nodes in the cluster.

Configuring OSW to Start Automatically After a System Start:
Step 1: Install the required RPM file to configure oswb as a service
[root@ssky1l4p1 downloads]# rpm -ivh oswbb-service-7.2.0-1.noarch.rpm

Step 2: Set up required parameters for auto start of OSW
[root@ssky1l4p1 ]# vi /etc/oswbb.conf
# Set OSW_ARCHIVE where the logs should be stored
OSW_ARCHIVE='archive'
# Set OSW_COMPRESSION to the desired compression scheme
OSW_COMPRESSION='gzip'
# Set OSW_HOME to the directory where your OSWatcher tools are installed
OSW_HOME='/u01/app/oracle/product/oswbb'
# Set OSW_INTERVAL to the number of seconds between collections
OSW_INTERVAL='30'
# Set OSW_RETENTION to the number of hours logs are to be retained
OSW_RETENTION='48'
# Set OSW_USER to the owner of the OSW_HOME directory
OSW_USER='oracle'
----------------------------
Step 3: Verify the chkconfig to ensure the OSW is enabled for auto start
[root@ssky1l4p1]# /sbin/chkconfig --list oswbb
oswbb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Step 4: Start OSW as a service
[root@ssky1l4p1]# /sbin/service oswbb start

Step 5: Stop OSW as a service
[root@ssky1l4p1]# /sbin/service oswbb stop


References:
https://blogs.oracle.com/db/entry/master_note_database_performance_overview
Master Note: Database Performance Overview (Doc ID 402983.1)
Master Note: SQL Query Performance Overview (Doc ID 199083.1)
OSWatcher Black Box User's Guide (Doc ID 1531223.1)
OSWatcher Black Box Analyzer User Guide (Doc ID 461053.1)
OSWatcher For Windows (OSWFW) User Guide (Doc ID 433472.1)
Metalink Note #: 301137.1 - O/S Watcher User Guide.  


352363.1 LTOM - The On-Board Monitor User Guide.

Managing Automatic startup and shutdown oracle on linux
--------------------------------------------------------
AUTOMATIC STARTUP AND SHUTDOWN OF ORACLE DATABASE INSTANCES [ID 61333.1]
222813.1

This method is deprecated in 11g in favor of Oracle Restart, but it still works.

Oracle database server provides two scripts to configure automatic database startup and shutdown process.

The scripts are,
$ORACLE_HOME/bin/dbstart
$ORACLE_HOME/bin/dbshut

Now let's look at the OS-level scripts. When a Unix machine boots, it runs the scripts beginning with Snnname in /etc/rc3.d.
The Sxxx scripts are invoked to start services when a Linux system boots.
Here the number nn indicates the order in which these scripts are run, and the name indicates the function of the script. Runlevel 5 is the runlevel at which the X Window System runs, and it is often the default runlevel. If you boot to a different runlevel, you will need to create the links in the corresponding /etc/rcN.d directory.

In the same way shutdown scripts are named as Knnname which are run from /etc/rc0.d.
The Kxxx scripts are used to shut down processes before the system is halted.
If we want that Oracle is the last program that is automatically started, and it is the first to be shutdown then we will name the startup and shutdown scripts on OS like /etc/rc3.d/S99oracle and /etc/rc0.d/K01oracle respectively.

The dbora script (which calls dbstart and dbshut) will be invoked via the OS links /etc/rc3.d/S99oracle and /etc/rc0.d/K01oracle respectively.

Note that dbstart and dbshut take each SID, in turn, from the /etc/oratab file and
startup or shutdown the database.

Automate Startup/Shutdown of Oracle Database on Linux:
------------------------------------------------------
Step 01: Be sure that oratab file is correct and complete.
Check for oratab file either in /etc/oratab or in /var/opt/oracle/oratab.
$ORACLE_SID:$ORACLE_HOME:[Y|N]
Here Y indicates that the database can be started up and shutdown using dbstart/dbshut script.

If there are two databases named arju and arjudup, then the oratab file will contain entries like,
arju:/var/opt/oracle/product/10.2.0/db_1:Y
arjudup:/var/opt/oracle/product/10.2.0/db_1:Y
where /var/opt/oracle/product/10.2.0/db_1 is the $ORACLE_HOME of my database.

Step 02: Create a script to call dbstart and dbshut.
In this example I will create one script that will do both startup and shutdown operation. I will name this script as dbora and will be placed in '/etc/init.d'.

a) Login as root.
b) cd /etc/init.d
c) Create a file called dbora and chmod it to 750.
# touch dbora
# chown dba:oinstall dbora
# chmod 750 dbora
d) vi dbora

#!/bin/bash
#
# chkconfig: 35 99 10  
# description: Starts and stops Oracle processes
#
ORA_HOME=/var/opt/oracle/product/10.2.0/db_1
ORA_OWNER=oracle

case "$1" in
'start')

# Start the TNS Listener
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
# Start the Oracle databases:
su - $ORA_OWNER -c $ORA_HOME/bin/dbstart
# Start the Intelligent Agent
if [ -f $ORA_HOME/bin/emctl ];
then
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl start agent"
elif [ -f $ORA_HOME/bin/agentctl ]; then

su - $ORA_OWNER -c "$ORA_HOME/bin/agentctl start"
else
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl dbsnmp_start"
fi
# Start Management Server
if [ -f $ORA_HOME/bin/emctl ]; then
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl start dbconsole"
elif [ -f $ORA_HOME/bin/oemctl ]; then
su - $ORA_OWNER -c "$ORA_HOME/bin/oemctl start oms"
fi
# Start HTTP Server
if [ -f $ORA_HOME/Apache/Apache/bin/apachectl ]; then
su - $ORA_OWNER -c "$ORA_HOME/Apache/Apache/bin/apachectl start"
fi
touch /var/lock/subsys/dbora
;;
'stop')
# Stop HTTP Server
if [ -f $ORA_HOME/Apache/Apache/bin/apachectl ]; then

su - $ORA_OWNER -c "$ORA_HOME/Apache/Apache/bin/apachectl stop"
fi
# Stop the TNS Listener

su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
# Stop the Oracle databases:
su - $ORA_OWNER -c $ORA_HOME/bin/dbshut
rm -f /var/lock/subsys/dbora
;;
esac
# End of script dbora


If automatic startup fails with "Failed to auto-start Oracle Net Listener using /ade/vikrkuma_new/oracle/bin/tnslsnr",
this is due to a hard-coded path in the dbstart script. To correct this, edit the "$ORACLE_HOME/bin/dbstart" script and replace the following line (approximately line 78):

ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle
With the following line:
ORACLE_HOME_LISTNER=$ORACLE_HOME
OR
ORACLE_HOME_LISTNER=${1:-$ORACLE_HOME}

3. As root perform the following to create symbolic links:

# ln -s /etc/init.d/dbora /etc/rc3.d/S99oracle
# ln -s /etc/init.d/dbora /etc/rc0.d/K01oracle
OR perhaps
ln -s /etc/rc.d/init.d/dbora /etc/rc.d/rc0.d/K10dbora

Alternatively you can register the Service using
/sbin/chkconfig --add dbora

This action registers the service to the Linux service mechanism.

4. Test the script to see if it works.

The real test is to reboot unix box and then see whether oracle is started up automatically or not.

However to test the script created in step 2, without rebooting, do the following:

Login as root and then,
# /etc/init.d/dbora start (for startup)
# /etc/init.d/dbora stop (for shutdown)

There are log files created in your ORACLE_HOME directory named startup.log and shutdown.log. You can inspect the contents of these to verify that the shutdown and startup are working as expected.
If the database stops and starts successfully across a restart, you are done.





OR
Script 2
--------

A Script to Start and Stop Oracle Database and the Oracle Listener
--------------------------------------------------------------------
#!/bin/sh
# /etc/rc.d/init.d/oracle
# Description: The following script
# starts and stops all Oracle databases and listeners
case "$1" in
start)
echo -n "Starting Oracle Databases: "
date +"! %T %a %D : Starting Oracle Databases after system start up." >> /var/log/oracle
echo "-------------------------------------" >> /var/log/oracle
su - oracle -c dbstart >> /var/log/oracle
echo "Done."
echo -n "Starting Oracle Listeners: "
su - oracle -c "lsnrctl start" >> /var/log/oracle
echo "Done."
echo ""
echo "--------------------------------------" >> /var/log/oracle
date +"! %T %a %D : Finished." >> /var/log/oracle
echo "--------------------------------------" >> /var/log/oracle
;;
stop)
echo -n "Shutting Down Oracle Listeners: "
echo "----------------------------------------" >> /var/log/oracle
date +"! %T %a %D : Shutting Down All Oracle Databases as part of system shutdown." >> /var/log/oracle
echo "----------------------------------------" >> /var/log/oracle
su - oracle -c "lsnrctl stop" >> /var/log/oracle
echo "Done."
echo -n "Shutting Down Oracle Databases: "
su - oracle -c dbshut >> /var/log/oracle
echo "Done."
echo ""
echo "-----------------------------------------" >> /var/log/oracle
date +"! %T %a %D : Finished." >> /var/log/oracle
echo "-----------------------------------------" >> /var/log/oracle
;;
restart)
echo -n "Restarting Oracle Databases: "
echo "------------------------------------------" >> /var/log/oracle
date +"! %T %a %D : Restarting Oracle Databases after system startup." >> /var/log/oracle
echo "------------------------------------------" >> /var/log/oracle
su - oracle -c dbshut >> /var/log/oracle
su - oracle -c dbstart >> /var/log/oracle
echo "Done."
echo -n "Restarting the Oracle Listener: "
su - oracle -c "lsnrctl stop" >> /var/log/oracle
su - oracle -c "lsnrctl start" >> /var/log/oracle
echo "Done."
echo ""
echo "---------------------------------------------" >> /var/log/oracle
date +"! %T %a %D : Finished." >> /var/log/oracle
echo "---------------------------------------------" >> /var/log/oracle
;;
*)
echo "Usage: oracle {start|stop|restart}"
exit 1
esac

Managing Resumable Space Allocation
------------------------------------
Resumable space allocation allows you to temporarily suspend operations that run out of space while you correct
the space issue without aborting the operation.

You can explicitly make operations run in the Resumable Space Allocation mode by using the
ALTER SESSION command. The Resumable Space Allocation feature will just suspend operations
until the problem is fixed (such as by you adding a data file to extend space) and it will resume automatically
after that.


Resumable Operations
---------------------
Specific Data Definition Language (DDL), Import/Export, Data Manipulation Language
(DML), and query statements are candidates for resumable execution:
- SELECT statements that run out of sort area temporary space
- INSERT, UPDATE, DELETE, and INSERT INTO…SELECT
- Export/import and SQL*Loader
- The following DDL statements:
CREATE TABLE AS SELECT
CREATE INDEX
ALTER TABLE MOVE PARTITION
ALTER TABLE SPLIT PARTITION
ALTER INDEX REBUILD
ALTER INDEX REBUILD PARTITION
ALTER INDEX SPLIT PARTITION
CREATE MATERIALIZED VIEW
CREATE MATERIALIZED VIEW LOG



SQL> GRANT RESUMABLE TO salapati;
You can enable a session for Resumable Space Allocation in one of two ways: set the RESUMABLE_TIMEOUT initialization parameter, or use the ALTER SESSION command to enable and disable resumable space allocation.

To enable resumable operations for the instance, alter the instance parameter RESUMABLE_TIMEOUT to a numeric value greater than zero. The default value is 0, which in effect initially disables resumable operations for all sessions. This represents the number of seconds that
an operation may suspend while you take corrective action. After the time-out is reached, the
operation will abort.

For example, to enable all database sessions for Resumable Space Allocation for a period of two hours, you’d set the parameter this way:
ALTER SYSTEM SET RESUMABLE_TIMEOUT=7200 SCOPE=SPFILE;

OR

Using the ALTER SESSION Statement
---------------------------------
You can enable Resumable Space Allocation in your session regardless of whether you’ve set the RESUMABLE_TIMEOUT initialization parameter:
SQL> ALTER SESSION ENABLE RESUMABLE;
SQL> ALTER SESSION DISABLE RESUMABLE;

The default resumable time-out for a session is 7,200 seconds.
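The ALTER SESSION statement also accepts an explicit timeout and a name; the name appears in DBA_RESUMABLE, which makes the suspended operation easier to identify. A sketch (the name 'nightly_load' is an invented example):

```sql
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'nightly_load';
```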
OR

You can also set the timeout interval using the DBMS_RESUMABLE package, as follows:
SQL> EXECUTE DBMS_RESUMABLE.set_session_timeout(4349,18000);

In the preceding example, the first number in the parentheses, 4349, is the SID of the session
for which you want the timeout to be enforced. You can omit the SID if you’re setting the timeout for
the current session. The second number, 18000, is the timeout period.

SQL> select dbms_resumable.get_timeout from dual;
GET_TIMEOUT
-----------
7200


SELECT * FROM DBA_RESUMABLE;
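You can also react to suspensions server-side with an AFTER SUSPEND trigger, for example to extend the timeout of whatever session has just suspended. A minimal sketch; the trigger name and the one-hour value are arbitrary choices, not from the source:

```sql
CREATE OR REPLACE TRIGGER resumable_default_timeout
AFTER SUSPEND ON DATABASE
BEGIN
  -- Give the DBA one hour to fix the space problem
  -- before the suspended statement aborts.
  DBMS_RESUMABLE.SET_TIMEOUT(3600);
END;
/
```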


Optional
--------
Create a logon trigger to enable the feature each time the user logs in:
CREATE OR REPLACE TRIGGER at_logon_set_resumable_session
AFTER LOGON ON helpdesk.SCHEMA
BEGIN
EXECUTE IMMEDIATE 'alter session enable resumable timeout 1200';
END;
/


Every time the user helpdesk logs in and a transaction is suspended due to lack of space, a record appears in the DBA_RESUMABLE view.

SELECT USER_ID, SESSION_ID, STATUS, START_TIME, SUSPEND_TIME, SQL_TEXT, ERROR_NUMBER, ERROR_MSG
FROM dba_resumable;




Disable
--------
ALTER SYSTEM SET RESUMABLE_TIMEOUT=0 SCOPE=SPFILE;

Similarly, you can disable the feature at the session level using:
ALTER SESSION DISABLE RESUMABLE;













References:

Sybex 11g OCP(lesson & lab lessons)






Managing Oracle Secure Backup (OSB)
------------------------------------
RMAN interacts with SBT devices through an MML, or media manager. Oracle provides its own MML, in the form of Oracle Secure
Backup.
OSB can back up to a tape library directly, so you don't need any other media management layer. And, best of all, OSB is tightly integrated with the database engine and thus can be controlled and administered via Oracle Enterprise Manager.

OSB can also perform filesystem backups, just like any standalone tool. Clearly, eliminating the need for MMLs for RMAN
backups, combined with this ability to back up filesystems, provides a lower-cost, less-complex alternative for backup and recovery.


OSB Administrative Domain and Servers
-------
An administrative domain is a collection of servers (hosts) that you manage as a single group for backup and restore operations. Within an OSB administrative domain, each server can be assigned one or more of the following roles:
-    Administrative (admin) server
-    Media server
-    Client server (or client host)

For each administrative domain, there is only one administrative server that controls the backup, restore, and scheduling operations. An administrative server manages one or more media servers and one or more clients.

A media server is a host that has at least one physical tape device connected to it. Media servers are responsible for transferring data to and from the attached tape devices.

The client host is the server where the Oracle databases and file-system data reside that you want to back up (and potentially restore). For a single-host environment, a server can be the admin server, media server, and client host in the OSB administrative domain.



OSB Interfaces
--------
There are four tools you can use to run and manage OSB backup and restore tasks:
-    Enterprise Manager database control and grid control
-    Oracle Secure Backup Web tool
-    Oracle Secure Backup command line tool (obtool)
-    Recovery Manager command line tool (rman)



Installing OSB
------------------------
www.oracle.com/technetwork/database/secure-backup/downloads/

0. Download OSB version 10.3.0.3.0 and save the zip file osb-10.3.0.3.0_linux32.zip under a staging directory, /stage/osb, on the admin server of the OSB administrative domain.

1. To perform the OSB installation, you must log on as root.
$ su - root
# mkdir -p /usr/local/oracle/backup

2.
$ cd /usr/local/oracle/backup
# /stage/osb/osb-10.3.0.3.0_linux32/setup

Leaving the default parameters for now, press ENTER to choose the default answer.

You can install the software on this host in one of the following ways:
(a) administrative server, media server and client
(b) media server and client
(c) client


If you are not sure which option to choose, please refer to the Oracle Secure
Backup Installation Guide. (a,b or c) [a]?
You are going to install all three components of OSB on the same server, so again press ENTER to choose the default answer.
The OSB administrative server, media server, and client are now installed.

Un-install OSB
-------------
To remove Oracle Secure Backup, run (as root) the uninstallob shell script located in the OSB_HOME/install directory.
The operating system variable OSB_HOME is commonly set to the /usr/local/oracle/backup directory.




Schedule Oracle Database and File System Data Backups
-----------------------------------
The OSB Web tool is used to configure the tape library and tape drives. Let’s add the oracle user and a Database Backup Storage Selector to enable backup of an Oracle database.
Connect and log into the OSB Web tool using the https:// link as the admin user. Go to the Configure page, click the Users link, click the Add button, and add the oracle user.
After the oracle user is added, click the Edit button, and change Preauthorized Access:
As a result, you will have the admin and oracle users:
Go to Configure: Hosts, and make sure that the server has the mediaserver role.
To add a storage selector, click the Database Backup Storage Selectors link at the bottom of the Configure page, click Add.

Oracle Enterprise Manager:
Now, let’s configure OEM for OSB usage. Connect to the database and go to the Availability tab in OEM. Click Backup Settings.
Click Configure to specify your OSB target, and on the Specify Oracle Secure Backup Target page, click the Add button and then enter the host:
Click Continue and enter the values shown here:
Name: OSB Server
Type:OSB
Home:/usr/local/oracle/backup
Hostname:
Monitor username:
Monitor password:

Click Return, and the OSB Server target is ready for backing up Oracle databases to tape.
The OSB administrative server is configured as an OEM target and can be managed by OEM.

It is not possible to perform Oracle database backup and restore using the OSB Web tool. Therefore, we recommend using OEM as a centralized interface to schedule backup and restore jobs for Oracle database and file system data.

Connect to the database, go to the Availability tab in OEM, and click Schedule Backup.




obtool
------
OSB provides a command line tool called obtool. You can invoke the command line version of the tool by typing:
$ obtool
which brings up the OSB prompt, ob>. You can type "help" here to see the available commands:
ob> help
ob> help topics

To log on as a specific user:
$ obtool -u apress_oracle
To verify which OSB user you are logged in as:
ob> id

To determine the rights of class admin, issue the lsclass command, as shown here:
ob> lsclass -l admin

In the following example, OSB user apress_oracle is created and assigned with oracle rights:
ob> mkuser --class oracle apress_oracle --preauth BLLNX3:oracle+cmdline+rman

The +cmdline attribute in the --preauth option grants the oracle OS user preauthorized access to the
obtool utility, while the +rman attribute grants preauthorized access for Oracle database SBT backups via RMAN. If no +rman
preauthorization is defined for the server hosting the target database that you want to back up, the
RMAN backup fails with an ORA-19511 error.

If you want to view all OSB users:
ob> lsuser


mkuser:    To create an OSB user.
lsuser: To display information about OSB users.
renuser:To rename an OSB user.
chuser:    To modify the attributes of an OSB user.
rmuser: To delete an OSB user.

mkmf: To create a media family.
lsmf: To display information about media families.
renmf: To rename a media family.
chmf: To modify the attributes of a media family.
rmmf: To delete a media family.

mkssel: To create a database backup storage selector.
lsssel: To display information about database backup storage selectors.
renssel: To rename a database backup storage selector.
chssel: To modify the attributes of a database backup storage selector.
rmssel: To delete a database backup storage selector.




To create a time-managed media family, issue the mkmf command. In the following example, media family APRESS_OS has a 7-day write window and a 14-day retention period. This means the volumes of media family APRESS_OS expire and are ready for recycling 21 days after the first backup piece is written to the tape volume:
ob> mkmf --writewindow 7days --retain 14days APRESS_OS

To create a content-managed media family, issue the mkmf, as shown here. Since the default volume expiration policy is content-managed, you can omit the --contentmanaged option.
ob> mkmf --contentmanaged APRESS_RMAN

In the following example, the database backup storage selector name is BLLNX3-DB11R2.ssel, which assigns the media family APRESS_RMAN for RMAN backups on Oracle database DB11R2 hosted on client server BLLNX3:
ob> mkssel --host BLLNX3 --dbname DB11R2 --family APRESS_RMAN BLLNX3-DB11R2.ssel

ob> exit|quit

Database Backup:
--------
The first option is to allocate an RMAN channel for SBT_TAPE inside the run{} block. In the following example, the media family APRESS_RMAN is passed as a parameter to the environment variable OB_MEDIA_FAMILY. The tape volumes will have a volume ID of APRESS_RMAN affixed with a six-digit sequence number generated by OSB.
RMAN> run {
allocate channel t1 type sbt_tape parms 'ENV=(OB_MEDIA_FAMILY=APRESS_RMAN)';
backup database;
}

You can also use the CONFIGURE command to set the RMAN channel for SBT_TAPE, as shown here:
RMAN> configure channel device type sbt_tape parms 'ENV=(OB_MEDIA_FAMILY=APRESS_RMAN)';
RMAN> backup device type sbt_tape database;

OB_MEDIA_FAMILY: To specify the media family that defines the characteristics of the tape volumes.
OB_DEVICE: To specify the tape drives to use during backup.
OB_RESOURCE_WAIT_TIME: To specify the wait time for resources to become available.
OB_ENCRYPTION: To specify the OSB encryption. If this is set, then OSB does not perform further encryption.
OB_RESTORE_DEVICE: To specify the tape drives to use during restore.

Database Restore
-----
For RMAN restore and recover, you have to allocate an RMAN channel for SBT_TAPE. In the following example, the RMAN channel for SBT_TAPE is allocated inside the run{} block:
RMAN> run {
allocate channel t1 type sbt_tape;
restore database;
recover database;
}

Another option is to run the CONFIGURE command. Unlike the previous example, you must include the PARMS clause in the CONFIGURE command (if you don't use the PARMS clause, a syntax error is returned):
RMAN> configure channel device type sbt_tape parms 'ENV=(OB_MEDIA_FAMILY=APRESS_RMAN)';
RMAN> restore device type sbt_tape database;
RMAN> recover device type sbt_tape database;

To make the RMAN repository (control file in this scenario) aware of backup pieces on tape, do the following:
1. Configure a channel for tape.
2. Make the RMAN repository aware of the backup piece via the CATALOG command.

Next, issue the CATALOG DEVICE TYPE SBT_TAPE BACKUPPIECE command, followed by the name of the backup piece. In this example, one backup piece is cataloged:
RMAN> catalog device type sbt_tape backuppiece 'silr06fk_1_1';

If you have multiple backup pieces that you want to catalog, you must issue the CATALOG DEVICE TYPE SBT_TAPE BACKUPPIECE command for each individual backup piece. The key to cataloging RMAN backup pieces on tape is that you must know their exact names.

Note: For disk-based backups, you can easily make the RMAN repository aware of multiple backup pieces via the CATALOG START WITH command. However, this technique only works for disk-based backups and not tape backups.
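Since each backup piece must be cataloged individually, a small script can generate the commands from a list of known piece names (the second piece name below is invented for illustration):

```python
# Generate one CATALOG command per tape backup piece; there is no
# CATALOG START WITH equivalent for SBT_TAPE backups.
pieces = ["silr06fk_1_1", "sjlr07gm_1_1"]  # example piece names

commands = [
    f"catalog device type sbt_tape backuppiece '{p}';" for p in pieces
]
for cmd in commands:
    print(cmd)
```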


File System Backup
------
catds To display the contents of a dataset file.
cdds To change the dataset directory.
chkds To check the syntax in a dataset file.
edds To modify a dataset file.
lsds To list the dataset file and dataset directory names.
mkds To create a dataset file or dataset directory.
pwdds To show the current directory in the dataset directory tree.
rends To rename a dataset file or dataset directory.
rmds To delete a dataset file or dataset directory.

Like any backup job, you need to define the three Ws and one H: What data to back up? When to run the backup? Where to store the backup? How should the backup run? To accomplish this with OSB, you need to create a dataset file, as well as configure a backup window, a schedule, and a trigger.

1. Creating Dataset Files:
Dataset files define the directories and files on the client hosts that you want to back up. In the following example, the dataset file specifies all files under the directory /home/oracle on the client host BLLNX1, except for files under the directory /home/oracle/temp and Oracle database-related files:
include host BLLNX1
include path /home/oracle {
exclude path /home/oracle/temp
exclude oracle database files
}

To create a dataset file named bllnx1_home-oracle.ds, perform the following steps using the obtool utility, as shown here:
a. Issue the mkds command followed by the name of the dataset file.
ob> mkds bllnx1_home-oracle.ds
b. The vi editor is invoked, and a dataset template is displayed.
c. Comment out or remove all existing lines, and add the following lines:
include host BLLNX1
include path /home/oracle {
exclude oracle database files
exclude path /home/oracle/temp
}
d. Save the file by pressing the Esc key and then typing :wq!
e. The following prompt is displayed. Press the Enter key to confirm the changes.
Apply your changes, if any [yes]?
You just created a dataset file.
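The include/exclude logic of the dataset file above can be modeled roughly as follows (a simplification; OSB's real matching also understands wildcards and the "exclude oracle database files" directive):

```python
# Rough model of dataset include/exclude path matching for one client host.
def is_backed_up(path, include="/home/oracle", excludes=("/home/oracle/temp",)):
    if not path.startswith(include):
        return False  # outside the included path
    return not any(path.startswith(ex) for ex in excludes)

print(is_backed_up("/home/oracle/scripts/rmanbkup.sh"))  # True
print(is_backed_up("/home/oracle/temp/scratch.log"))     # False
```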

2. Configuring Backup Windows:
The backup window defines the range of time during which scheduled backups are allowed to run. The default backup window is daily, 00:00 to 24:00. If no backup window is defined, scheduled backups will not run at all.
For production servers, you may want the backups to run daily only between 1 a.m. and 5 a.m., when there is minimal database traffic. To define a range of time for your backup window, perform the following steps:
a. Remove the existing daily 00:00-24:00 backup window, since it overlaps the backup window you want to create, by issuing the rmbw command:
ob> rmbw daily
b. Issue the lsbw command to confirm the removal:
ob> lsbw

There are no backup windows.
This result indicates that the daily backup window has been removed.
c. To create the backup window, issue the addbw command. The following example defines a daily backup window from 01:00 to 05:00:
ob> addbw --times 'daily/01:00-05:00'

addbw To add a backup window.
chkbw To check whether there is a backup window defined.
lsbw To list the backup windows.
rmbw To remove a backup window.
setbw To modify a backup window.
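The effect of a backup window can be sketched as a simple time-range check (illustrative only; OSB's scheduler enforces this for you):

```python
from datetime import time

# A scheduled backup may start only inside the backup window (01:00-05:00).
def in_window(t, start=time(1, 0), end=time(5, 0)):
    return start <= t <= end

print(in_window(time(2, 0)))   # 02:00 is inside 01:00-05:00 -> True
print(in_window(time(23, 0)))  # 23:00 is outside -> False
```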


3. Configuring Backup Schedules and Triggers:
The backup schedule defines what data to back up, where to store the backup, and how the backup runs, while the trigger defines when a scheduled backup runs. For what data to back up, specify the datasets. For where to store the backup, specify the tape drives; if no specific tape drive is selected, any available tape drive is used. For how the backup runs, set the job priority and backup encryption options. The lower the job priority number, the greater the preference the scheduler assigns to the job. The default job priority is 100, and the default for the encryption option is no.
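The priority rule (a lower number wins) amounts to sorting pending jobs in ascending priority order; the job names below are made up for illustration:

```python
# Lower job-priority value = higher scheduler preference (default is 100).
jobs = [("fs-backup", 100), ("catalog-backup", 50), ("adhoc", 200)]

for name, prio in sorted(jobs, key=lambda j: j[1]):
    print(name, prio)  # catalog-backup first, adhoc last
```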

The following example uses the MKSCHED command to create a backup schedule named bllnx1_homeoracle.sched for dataset bllnx1_home-oracle.ds:
ob> mksched --dataset bllnx1_home-oracle.ds bllnx1_home-oracle.sched

After creating the backup schedule, create a trigger to define when to run the scheduled backup. The following example uses the CHSCHED command to define a trigger for schedule bllnx1_home-oracle.sched to run daily at 02:00 (2 a.m.). The -a option adds a trigger, the -d option specifies the day, the -t option the time, and the -f option the media family.
ob> chsched -a -d daily -t 02:00 -f APRESS_OS bllnx1_home-oracle.sched

To display the information about the backup schedules:
ob> lssched -l bllnx1_home-oracle.sched

To remove a backup schedule, issue the rmsched command, as shown here:
ob> rmsched bllnx1_home-oracle.sched




Performing On-Demand File-System Backups
----------------------------------
To run a one-time backup on a specific client host, use the backup command. In the following example, the dataset file is bllnx1_home-oracle.ds, which was created earlier:
ob> backup --dataset bllnx1_home-oracle.ds --go

If you omit the --go option, the backup request remains in the queue. You can issue the lsbackup command to display the queued backup requests, as shown here:
ob> lsbackup

To forward the queued backup requests to the OSB scheduler, issue the backup command with the --go option, as shown here:
ob> backup --go

To manually back up the OSB catalog on the admin server, issue the following backup command:
ob> backup --level 0 --dataset OSB-CATALOG-DS --go




File System Restore
---------------------
There are three ways to restore from a file-system backup:
• catalog-based restore
• raw restore
• the obtar command

In a catalog-based restore, you provide the directory and file, which you can browse in the OSB catalog. In both a raw restore and an obtar restore, you provide the volume ID and file number. The volume ID is the unique name (containing the media family) assigned to a tape volume, and the file number is the number of the backup image on the tape volume. Catalog-based and raw restores are performed using the obtool utility, while obtar commands are issued at the operating system prompt.
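Since a volume ID embeds the media family and a six-digit sequence number, it can be split apart like this (a sketch based on the volume ID format shown in this section):

```python
# Split an OSB volume ID of the form <media_family>-<6-digit sequence>.
def parse_volume_id(vol_id):
    family, seq = vol_id.rsplit("-", 1)  # split on the LAST hyphen only,
    return family, int(seq)              # since family names may contain hyphens

print(parse_volume_id("APRESS_OS-000001"))  # ('APRESS_OS', 1)
```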

1. Performing Catalog-Based Restore:
------
For a catalog-based restore, you can browse the OSB catalog to identify and verify the files you want to restore. In the following example, you want to restore the file /home/oracle/scripts/rmanbkup.sh on the client host BLLNX1. To restore the file using the OSB catalog, perform the following steps:
a. Set the host variable to BLLNX1, which is the source host.
ob> set host BLLNX1
b. Issue the cd command to navigate to the directory.
ob> cd /home/oracle/scripts
c. Issue the ls command to verify the files in the directory.
ob> ls -l rmanbkup.sh
-rwxr-xr-x oracle.oinstall 782 2010/01/13.18:14 rmanbkup.sh (0)

d. Issue the restore command.
ob> restore '/home/oracle/scripts/rmanbkup.sh' --go


2. Performing a Raw Restore
--------
To restore data using a raw restore, you must know the volume ID and file number from which to extract the data. However, if you know that the backups use a particular media family, the search becomes a bit easier.

In the following example, you are going to restore the file /home/oracle/scripts/rmanbkup.sh from a tape volume that has media family APRESS_OS.
a. Issue the lsvol command with --contents option to display the contents of the volumes associated to media family APRESS_OS:
ob> lsvol --contents --family APRESS_OS --nobarcode
VOID OOID Seq Volume ID Family Created Attributes
2845 2845 1 APRESS_OS-000001 APRESS_OS 10/21.04:41 open; closes 10/28.04:41
BSOID File Sect Level Host Size Created Attributes
31250 1 1 0 BLLNX1 208.6 MB 10/21.04:41
31258 2 1 0 BLLNX1 208.8 MB 10/21.06:04
31260 3 1 0 BLLNX1 210.1 MB 10/21.12:23
31261 4 1 0 BLLNX1 210.1 MB 10/21.12:34

b. According to the output shown above, you want to restore from the backup taken on 10/21.06:04 (i.e., October 21 at 6:04 a.m.). The corresponding file number is 2, and the volume ID is APRESS_OS-000001. Issue the following restore command. The -R option indicates a raw restore that does not use the OSB catalog, the -F option specifies the file number, and the -v option specifies the volume ID.
ob> restore -R -F 2 -v APRESS_OS-000001 /home/oracle/scripts/rmanbkup.sh --go
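Choosing the file number from the lsvol output can be modeled as a lookup by creation timestamp (the rows below mirror the example output; the parsing itself is illustrative):

```python
# Map lsvol backup-section rows to (file_number, created) and pick the
# file number for the backup created at a given time.
rows = [
    (1, "10/21.04:41"),
    (2, "10/21.06:04"),
    (3, "10/21.12:23"),
    (4, "10/21.12:34"),
]

def file_for(created, rows):
    return next(f for f, c in rows if c == created)

print(file_for("10/21.06:04", rows))  # 2 -> use with restore -R -F 2
```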

3. Performing an obtar Restore
---------
When you run OSB restores, they are actually translated to obtar commands in the background. The obtar commands are issued at the operating system level. The obtar command is seldom used for restore operations, since you need to provide the volume ID and file number, which are not readily available, especially if you have no access to the OSB catalog.
In the following example, you are going to restore the file /home/oracle/scripts/rmanbkup.sh from the tape that has volume ID APRESS_OS-000001 at file number 2.
a. Issue the loadvol command to manually load the tape that has volume ID APRESS_OS-000001 into an available tape drive. Here, tape drive vdrive8 is used.
ob> loadvol -D vdrive8 --volume APRESS_OS-000001

b. To check the contents of the tape, issue the obtar command with -t option.
$ obtar -t -f vdrive8

c. To perform the restore using the obtar command, issue the following command with the -x option. The -F option specifies the file number, and the -k option prevents overwriting existing files.
$ obtar -F 2 -x -f vdrive8 -k /home/oracle/scripts/rmanbkup.sh

d. Since the restore runs through the obtar command, which executes at the operating system level, you can't monitor the restore job using the obtool utility. However, one way to verify that the file was restored is to issue the ls command at the OS prompt:
$ ls -l /home/oracle/scripts/rmanbkup.sh



OSB Job Monitoring
------------------
Listing Jobs:
To display jobs that are still running, issue the lsjob command with the --active or -a option. For other job states, use the --complete or -c option for completed jobs, --pending or -p for pending jobs, --inputrequest or -i for jobs currently requesting input, and --all or -A to display all jobs regardless of state. For example, to list active jobs:
ob> lsjob --active
ob> lsjob -a -l -j -o -R -L -C

For RMAN jobs, use the --dbname or -d option to limit the output to a specific Oracle database. If you know the database ID, you can use the --dbid or -I option instead. This example uses the --dbname option:
ob> lsjob --active --dbname DB11R2


For file-system jobs, use the --host or -h option to limit the output to a specific client host, as shown here:
ob> lsjob --active --host BLLNX1
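Filtering jobs by state and host, as the lsjob options do, can be sketched as follows (the job IDs are made up; this only models the filtering, not the obtool output format):

```python
# Minimal model of lsjob filtering: by state (--active) and host (--host).
jobs = [
    {"id": "admin/1.1", "state": "active",   "host": "BLLNX1"},
    {"id": "admin/2.1", "state": "complete", "host": "BLLNX1"},
    {"id": "admin/3.1", "state": "active",   "host": "BLLNX3"},
]

active_bllnx1 = [j["id"] for j in jobs
                 if j["state"] == "active" and j["host"] == "BLLNX1"]
print(active_bllnx1)  # ['admin/1.1']
```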



Showing Job Transcripts:
ob> lsjob --active --subjobs




Using the Oracle Backup Web Interface Tool
------------------------------------------
$ ps -ef | grep observiced
https://localhost
Username: admin
Password: (leave blank)

You can perform four major activities using the Oracle Backup Web Interface: configure, manage, backup, and restore operations.

Expert users can use the obtar command-line tool to work directly with tape drives, bypassing the Oracle Secure Backup scheduler.




Virtual Test Devices
----------------------
You may relate to the expression "experience is the best teacher." You have read this chapter, and you want to try the OSB commands and examples yourself. However, you don't have a physical tape device connected to any test server in your environment. In this situation, where are you going to install and experiment with OSB? Well, you can configure a virtual test device solely for testing purposes.

To configure a virtual test device, perform the following steps. In this example, the hostname of the media server is BLLNX3.
1. Log on as the oracle OS user at your designated media server of the OSB administrative domain.
$ su - oracle
2. At the OSB media server, create a directory to host the virtual storage elements and virtual tape devices. Make sure you have enough disk space on the mount point where you are going to create the directory.
$ mkdir /osb_vdevices
3. Log on as the admin user to the obtool utility, and provide the corresponding password.
$ obtool -u admin
Password:

4. Configure the virtual tape library by running the following mkdev command. The -t option specifies the device as a tape library, and the -v option specifies a virtual tape library. The -S option specifies the number of storage elements. The -I option specifies the number of import/export elements. The -o option specifies that the tape device is logically available to OSB. The -B option with the yes value specifies that a barcode reader is present. The -a option attaches the virtual library vlib1 to the directory /osb_vdevices/vlib1 on host BLLNX3.

ob> mkdev -t library -v -S20 -I2 -o -B yes -a BLLNX3:/osb_vdevices/vlib1 vlib1

5. Configure the virtual tape devices by running the following mkdev command. The -t option specifies the device as a tape device, and the -v option specifies a virtual tape device. The -o option specifies that the tape device is logically available to OSB. The -l option specifies the name of the associated tape library. The -d option specifies the data transfer element (DTE). The -a option attaches the virtual tape drive vdrive1 to the directory /osb_vdevices/vdrive1 on host BLLNX3. To configure additional virtual tape devices, run the same mkdev command, changing vdrive1 and /osb_vdevices/vdrive1 to vdrive2 and /osb_vdevices/vdrive2, and so on.
ob> mkdev -t tape -v -o -l vlib1 -d1 -a BLLNX3:/osb_vdevices/vdrive1 vdrive1
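Since each additional drive repeats the same mkdev command with the name, directory, and DTE incremented, a loop can generate the commands (a sketch, assuming host BLLNX3 and DTEs numbered 1..N as in the step above):

```python
# Generate mkdev commands for N virtual tape drives attached to vlib1.
def mkdev_cmds(n, host="BLLNX3", base="/osb_vdevices"):
    return [
        f"mkdev -t tape -v -o -l vlib1 -d{i} -a {host}:{base}/vdrive{i} vdrive{i}"
        for i in range(1, n + 1)
    ]

for cmd in mkdev_cmds(2):
    print(cmd)
```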

6. Run the insertvol command to manually insert volumes into the tape library. The -L option specifies the library name, which is vlib1. Since 20 storage elements were defined when the virtual library was created, issue unlabeled 1-20 to insert 20 new volumes.
ob> insertvol -L vlib1 unlabeled 1-20

7. Issue the lsvol command to display the volumes in the tape library vlib1.
ob> lsvol -l -L vlib1




OSB Software Upgrades
---------------------
You have an older version of OSB installed and you want to upgrade to the current release. To perform an OSB upgrade, perform steps similar to installing OSB. However, before performing the upgrade, make sure to back up the OSB catalog on the admin server and stop the OSB daemons and services on all hosts in the administrative domain. Also, upgrade the admin server first, then the media servers and client hosts.

OSB Service Shutdown and Startup Commands:
-------
Linux
 /etc/init.d/observiced stop|start
Solaris
/etc/init.d/OracleBackup stop|start
AIX
/etc/rc.d/init.d/OracleBackup stop|start
HP-UX
/sbin/init.d/OracleBackup stop|start
Windows
net stop|start observiced




References:


Configure using Oracle Enterprise Manager (OEM), RMAN Backups to tape via Oracle Secure Backup (OSB) (Doc ID 1460052.1)
How To Configure RMAN Backups To Tape via Oracle Enterprise Manager (Doc ID 735953.1)

www.oracle.com/database/secure-backup.html
http://docs.oracle.com/cd/B19306_01/install.102/b14235/toc.htm (Oracle Secure Backup Installation Guide Release 10.1)
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/ha/osb10_2install/osb1.htm (Installing and Configuring Oracle Secure Backup 10.2)
http://docs.oracle.com/cd/E14812_01/doc/index.htm (Oracle Secure Backup Installation and Configuration Guide Release 10.3)