Friday, 27 May 2016

Printers setup in Oracle Applications


Prerequisites:

Printers must first be defined and enabled at the Unix OS level (typically done by the OS administrator). To check printer status at the UNIX level, run:
lpstat -a
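
lpstat -a lists which print destinations are accepting requests. To check the status of a single printer instead, lpstat also accepts a printer name (the name below is hypothetical):

lpstat -p myprinter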

Configure Printer in Oracle Applications:

The following is the order of Printer setup in Oracle Applications:
1. Define Driver
2. Define Style
3. Define Printer Type
4. Register the Printer

• Define Driver:

Log in to the application with the System Administrator responsibility.
Define the printer driver: Install → Printer → Driver
Provide the following values (depending upon your configuration):
Driver Name:
User Driver Name:
Description: An appropriate description

• Define Style:

Log in to the application with the System Administrator responsibility.
Define the printer style: Install → Printer → Style
Provide the following values (depending upon your configuration):
Style Name:
Seq:
User Style:
• Define Printer Type:

Log in to the application with the System Administrator responsibility.
Define the printer type: Install → Printer → Type
Provide the following values (depending upon your configuration):
Type:
Style: The style defined above
Driver Name: The driver defined above


• Register the Printer:

Log in to the application with the System Administrator responsibility.
Register the new printer: Install → Printer → Register
Provide the following values (depending upon your configuration):
Printer: The printer name as defined at the OS level
Type: The printer type defined above
Description: A useful description

Some important tables related to printers are:

FND_PRINTER
FND_PRINTER_INFORMATION
FND_PRINTER_STYLES
FND_PRINTER_DRIVERS
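
As a quick sanity check after registration, the printers known to the application can be listed straight from the base table (a minimal sketch; verify the column names against your EBS release):

SELECT PRINTER_NAME, PRINTER_TYPE
  FROM FND_PRINTER
 ORDER BY PRINTER_NAME;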


NOTE: After any change to the printer setup, bounce the concurrent managers for the changes to take effect.

Monitoring Oracle Streams

Monitoring Oracle Streams is an important task in a replication environment. As discussed in my other posts, an Oracle Streams environment basically consists of capture processes, propagations, apply processes and queues. Breaking monitoring down into these components and looking at their statistics, performance and errors can save a lot of effort in the long term.

First of all, information about the Streams environment can be obtained with the help of the DBMS_STREAMS_ADVISOR_ADM package, which gathers information about the Oracle Streams topology and performance. The gathered data can then be queried through the DBA_STREAMS_TP_* views, such as DBA_STREAMS_TP_COMPONENT, DBA_STREAMS_TP_DATABASE and DBA_STREAMS_TP_PATH_STAT.
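
A minimal sketch of running the advisor and listing the components it discovered (view and column names as documented for 11g; verify on your release):

BEGIN
  DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE;
END;
/

SELECT COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE
  FROM DBA_STREAMS_TP_COMPONENT
 ORDER BY COMPONENT_ID;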

Tracking LCRs:

1. Connect as Stream Administrator

2. Begin message tracking

BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING(
    tracking_label => 'TRACK_LCRS',
    actions        => DBMS_STREAMS_ADM.ACTION_MEMORY);
END;
/

Confirm using:

SET SERVEROUTPUT ON SIZE 4000
DECLARE
  tracking_label  VARCHAR2(4000);
BEGIN
  tracking_label := DBMS_STREAMS_ADM.GET_MESSAGE_TRACKING();
  DBMS_OUTPUT.PUT_LINE('Tracking Label: ' || tracking_label);
END;
/

3. Using the query below, all LCRs with the label TRACK_LCRS can be listed:

COLUMN COMPONENT_NAME HEADING 'Component|Name' FORMAT A10
COLUMN COMPONENT_TYPE HEADING 'Component|Type' FORMAT A12
COLUMN ACTION HEADING 'Action' FORMAT A11
COLUMN SOURCE_DATABASE_NAME HEADING 'Source|Database' FORMAT A10
COLUMN OBJECT_OWNER HEADING 'Object|Owner' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A10
COLUMN COMMAND_TYPE HEADING 'Command|Type' FORMAT A7
 
SELECT COMPONENT_NAME,
       COMPONENT_TYPE,
       ACTION,
       SOURCE_DATABASE_NAME,
       OBJECT_OWNER,
       OBJECT_NAME,
       COMMAND_TYPE
   FROM V$STREAMS_MESSAGE_TRACKING
   WHERE TRACKING_LABEL='TRACK_LCRS';


4. Stop message tracking

BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING(
    tracking_label => NULL,
    actions        => DBMS_STREAMS_ADM.ACTION_MEMORY);
END;
/

Useful Queries:


-- Display ANYDATA Queue
COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
  FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
  WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
        q.QUEUE_TABLE = t.QUEUE_TABLE AND
        q.OWNER       = t.OWNER;
       
       
-- Number of messages in each buffered queue

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT QUEUE_SCHEMA,
       QUEUE_NAME,
       (NUM_MSGS - SPILL_MSGS) MEM_MSG,
       SPILL_MSGS,
       NUM_MSGS
  FROM V$BUFFERED_QUEUES;
 
-- Capture processes publishing LCRs into buffered queues
COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 9999999

SELECT SENDER_NAME,
       SENDER_ADDRESS,
       QUEUE_NAME,       
       CNUM_MSGS,
       LAST_ENQUEUED_MSG
  FROM V$BUFFERED_PUBLISHERS;


-- Performance Statistics of Propagations

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.PROPAGATION_NAME,
       s.QUEUE_NAME,
       s.DBLINK,
       (s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
       (s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
       (s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.SOURCE_QUEUE_OWNER      = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME       = s.QUEUE_NAME AND
        p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
        p.DESTINATION_QUEUE_NAME  = s.DST_QUEUE_NAME;


-- Apply Errors
SET PAGES 1000 LINES 150
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DB HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A50
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME,
       substr(SOURCE_DATABASE,1,6) SOURCE_DB,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
FROM DBA_APPLY_ERROR;
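
Once the root cause of an apply error has been fixed, the failed transaction can be retried, or discarded if it is no longer wanted (the transaction ID below is hypothetical):

EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.17.2485');

-- Or discard it:
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.17.2485');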


 

Friday, 20 May 2016

Setup Oracle Streams - Datapump Instantiation (Part 2)

Hi everyone, this post is a continuation of Oracle Streams - Datapump Instantiation (Part 1); please refer to that first to get a clear picture of the prerequisites. This post discusses setting up Streams replication between two servers. To begin with, we have the following ready:

  • A PROD.EXAMPLE.COM (source) database and a REPL.EXAMPLE.COM (destination) database
  • Oracle Net configured between the two, with both databases in ARCHIVELOG mode
  • The STREAMADM user set up with default tablespace STREAM_TBS and Streams administrator privileges on both the source and destination databases
  • Database links created under the STREAMADM user on each database, pointing to the STREAMADM user on the other database

With the above done, we are ready to create the Streams-related configuration in the databases.

Create ANYDATA queue 

An ANYDATA queue stores messages whose payloads are of the ANYDATA type. Therefore, an ANYDATA queue can store a message with a payload of nearly any type, provided the payload is wrapped in an ANYDATA wrapper.

Create ANYDATA queue on source and destination databases by executing below command:


EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
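
Called with no arguments, SET_UP_QUEUE creates a queue table named streams_queue_table and a queue named streams_queue in the schema of the user executing it. For reference, the equivalent explicit call (a sketch assuming the STREAMADM owner used throughout this post) would be:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streamadm.streams_queue_table',
    queue_name  => 'streamadm.streams_queue',
    queue_user  => 'STREAMADM');
END;
/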

Create CAPTURE Process/Rules


A capture process captures changes happening at the database, schema or table level, depending upon the rules set up for it. The procedure used for setting up the rules automatically creates a capture process if one does not exist.

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
   schema_name          => '<SCHEMA_NAME>',
   streams_type         => 'capture',
   streams_name         => 'CAPTURE_PROD',
   queue_name           => 'streamadm.streams_queue',
   include_dml          => TRUE,
   include_ddl          => TRUE,
   include_tagged_lcr   => FALSE,
   source_database      => 'PROD.EXAMPLE.COM',
   inclusion_rule       => TRUE,
   and_condition        => NULL);
END;
/

  • The streams_name parameter specifies the name of the capture process; if it does not exist, it is created on the first execution. I named it CAPTURE_PROD
  • include_dml and include_ddl are set to TRUE since I want to capture both kinds of changes
  • source_database must be the global database name of the source
Run the above procedure N times for N schemas, with the correct SCHEMA_NAME supplied each time; the result can be verified as shown below.
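
To confirm that the capture process was created (it remains DISABLED until started at the end of the setup):

SELECT CAPTURE_NAME, QUEUE_NAME, STATUS
  FROM DBA_CAPTURE;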

Prepare Schemas for Instantiation


The procedure below enables unconditional supplemental logging for all columns of the tables in the schema being prepared for instantiation, and for any table added to this schema in the future. This ensures that all changes to tables under the schema can be captured by the capture process.



BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name          => '<SCHEMA_NAME>',
    supplemental_logging => 'all'); 
END;  
/


Run the above procedure N times for N schemas, with the correct SCHEMA_NAME supplied each time; preparation can be verified as shown below.
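
A quick check that the schemas have been prepared (a sketch using the documented view for schema-level preparation):

SELECT SCHEMA_NAME, TIMESTAMP
  FROM DBA_CAPTURE_PREPARED_SCHEMAS;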

Take Datapump Export

The Datapump export creates a dumpfile of the selected schemas that are to be replicated from source to destination. As discussed earlier, the objects to be replicated must exist on both databases, and Datapump is what puts them in place at the destination for us.

1. Find the current SCN of the database

SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL; 
eg: 6069217041919



2. Take Datapump Export


expdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=exp_meta_user.log DIRECTORY=STREAM_SRC CONTENT=METADATA_ONLY FULL=y INCLUDE=ROLE,PROFILE


expdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_SRC DUMPFILE=exp_schemas.dmp FLASHBACK_SCN=6069217041919

Exporting ROLE and PROFILE objects is suggested because, in my case, the import at the destination failed due to the absence of these objects. The FLASHBACK_SCN parameter is used since the schema(s) may contain foreign key constraints and the export must be consistent as of a single SCN. After the SCN has been obtained, it must be ensured that no DDL changes occur in the database until the export completes. SCHEMA_NAMES_LIST should be a comma-separated list of schema names.

Create Required Tablespaces

Permanent and temporary tablespaces associated with the schemas exported from the source must exist at the destination database for the import to succeed. The required tablespace names and their DDLs can be obtained using the queries below:

SET PAGES 10000 LINES 150 LONG 99999999;

SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
       || TABLESPACE_NAME || ''') FROM DUAL;'
  FROM DBA_SEGMENTS
 WHERE OWNER IN ('SCHEMA_NAMES_LIST');

SELECT DISTINCT 'SELECT DBMS_METADATA.GET_DDL(''TABLESPACE'','''
       || TEMPORARY_TABLESPACE || ''') FROM DUAL;'
  FROM DBA_USERS
 WHERE USERNAME IN ('SCHEMA_NAMES_LIST');

After obtaining the DDL statements of these tablespaces, adjust them to match the filesystem layout of the destination database. Copy or spool the statements into a .sql file and create the tablespaces at the destination database.
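
A minimal SQL*Plus spool sketch (the file name is hypothetical):

SET PAGES 0 LINES 200 LONG 99999999 FEEDBACK OFF
SPOOL create_tablespaces.sql
-- run the generated SELECT DBMS_METADATA.GET_DDL(...) statements here
SPOOL OFF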

Import Schemas into Destination Database

Copy the exported dumpfiles exp_meta_user.dmp and exp_schemas.dmp from the source server to the destination server under the /tmp/dest_dp directory, which is the path for the directory object STREAM_DEST, and initiate the import.


impdp streamadm/streamadm DUMPFILE=exp_meta_user.dmp LOGFILE=imp_meta_user.log DIRECTORY=STREAM_DEST

impdp streamadm/streamadm SCHEMAS=SCHEMA_NAMES_LIST DIRECTORY=STREAM_DEST DUMPFILE=exp_schemas.dmp LOGFILE=imp_schemas.log

Instantiate Objects at Destination

Instantiating the objects at the destination database sets, for each schema, the SCN from which changes will be applied at the destination. The procedure below is used for this operation; note that it runs at the source and sets the instantiation SCN at the destination over the database link:

DECLARE
  iscn  NUMBER;         -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@REPL.EXAMPLE.COM(
    source_schema_name    => 'SCHEMA_NAME',
    source_database_name  => 'PROD.EXAMPLE.COM',
    instantiation_scn     => iscn,
    recursive             => TRUE);
END;
/


Run the above procedure N times for N schemas, with the correct SCHEMA_NAME supplied each time; the recorded SCNs can be checked as shown below.
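
To verify at the destination that the instantiation SCNs were recorded:

SELECT SOURCE_SCHEMA, INSTANTIATION_SCN
  FROM DBA_APPLY_INSTANTIATED_SCHEMAS;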

Create APPLY Process/Rules

The apply process is created at the destination database; this process applies the changes sent by the capture process from the source database. The procedure used to set up the apply rules automatically creates an apply process if one does not exist.

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
   schema_name          => 'SCHEMA_NAME',
   streams_type         => 'apply',
   streams_name         => 'APPLY_RPL',
   queue_name           => 'streamadm.streams_queue',
   include_dml          => TRUE,
   include_ddl          => TRUE,
   include_tagged_lcr   => FALSE,
   source_database      => 'PROD.EXAMPLE.COM',
   inclusion_rule       => TRUE,
   and_condition        => NULL);
END;
/


In the above procedure, replace SCHEMA_NAME with the name of a schema to be replicated.

  • The streams_name parameter specifies the name of the apply process; if it does not exist, it is created on the first execution. I named it APPLY_RPL
  • include_dml and include_ddl are set to TRUE since I want to apply both kinds of changes
  • source_database must be the global database name of the source
Run the above procedure N times for N schemas, with the correct SCHEMA_NAME supplied each time; the result can be verified as shown below.
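
To confirm that the apply process was created (it remains DISABLED until started later):

SELECT APPLY_NAME, QUEUE_NAME, STATUS
  FROM DBA_APPLY;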

Create Propagation at Source

We now have a capture process ready to capture changes, an apply process waiting to apply changes, and an ANYDATA queue to hold the changes. The propagation sets up the mechanism to ship changes captured into the queue at the source database over to the queue at the destination database.

A propagation can be created automatically when propagation rules are added for the first time using the procedure below:

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
   schema_name              => 'SCHEMA_NAME',
   streams_name             => 'PROD_TO_RPL',
   source_queue_name        => 'streamadm.streams_queue',
   destination_queue_name   => 'streamadm.streams_queue@REPL.EXAMPLE.COM',
   include_dml              => TRUE,
   include_ddl              => TRUE,
   include_tagged_lcr       => FALSE,
   source_database          => 'PROD.EXAMPLE.COM',
   inclusion_rule           => TRUE,
   and_condition            => NULL,
   queue_to_queue           => TRUE);
END;
/


Run the above procedure N times for N schemas, with the correct SCHEMA_NAME supplied each time; the propagation status can be checked as shown below.
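
To confirm that the propagation was created and is enabled:

SELECT PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_DBLINK, STATUS
  FROM DBA_PROPAGATION;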

Start Apply Process

At the destination database, run the procedures below to start the APPLY process. Setting disable_on_error to N keeps the apply process running even when it encounters an error:


BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name  => 'APPLY_RPL',
    parameter   => 'disable_on_error',
    value       => 'N');
END;
/

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name  => 'APPLY_RPL');
END;
/

Start Capture Process

At the source database, run the procedure below to start the CAPTURE process:



BEGIN
   DBMS_CAPTURE_ADM.START_CAPTURE(
      capture_name  => 'CAPTURE_PROD');
END;
/

All done! Now test replication by creating a table, inserting a few rows and dropping it, as sketched below.
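
A minimal smoke test, run on the source against one of the replicated schemas (the schema and table names are hypothetical):

-- On PROD.EXAMPLE.COM
CREATE TABLE app_owner.streams_smoke_test (id NUMBER);
INSERT INTO app_owner.streams_smoke_test VALUES (1);
COMMIT;

-- After a short delay, confirm the table and the row exist on REPL.EXAMPLE.COM.
-- Then drop the table on the source and confirm the drop replicates as well:
DROP TABLE app_owner.streams_smoke_test;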