Tuesday, December 18, 2012

Oracle Applications Database: Migration from Solaris to Linux

Here I list the steps we followed when migrating our Oracle Applications databases from Solaris to Linux.

The other important changes introduced as part of this effort (apart from the platform migration) were:
1. The 11.2.0.2 version of Grid Infrastructure (for RAC) or Oracle Restart (for non-RAC databases) was installed on Linux. On Solaris, standalone 11.1.0.7.6 ASM installations existed.

2. Moving away from multiple ASM instances per box. This was not a supported configuration, but it is how the Solaris environment was originally set up: each database on a box had its own ASM instance. It worked, but it caused problems during upgrades. If you applied a PSU to an ASM instance and to one of the databases it supported, a database served by the same ASM instance but still at a lower PSU level could not be brought up (for example, with ASM at PSU 6 and database D1 at PSU 6, database D2 using the same ASM instance via another diskgroup but at a lower PSU level could not be brought up). The advantage of this model was that if an ASM instance crashed, only the database it supported had to be restarted. But for supportability reasons we decided to move away from it (and starting with 11.2.0.2, the GI install no longer allows this configuration anyway).

3. The RDBMS version was kept at 11.1.0.7. We did not want to upgrade the RDBMS to 11.2.0.2 at the same time, for fear of introducing more variables (and the difficulty in testing - whether an issue was caused by the Linux migration or the database upgrade).

4. Since the Linux boxes were beefier than the Solaris boxes, we added more SGA (10-15% more).
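
For reference, the SGA bump was a simple spfile change on the new boxes before the import; the sizes below are illustrative placeholders, not our exact values:

        -- run on the new Linux instance; sizes are placeholders
        alter system set sga_max_size=24G scope=spfile sid='*';
        alter system set sga_target=24G scope=spfile sid='*';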

As far as the actual process is concerned, we followed the standard Oracle note on migrating Oracle Applications, 362205.1, with the minor modifications listed below.

Tasks

1. Grid Installation and RDBMS installation on Linux boxes.
2. Create the export parameter file
    Sample file:

        directory=linux_mig
        dumpfile=aexp%U.dmp
        filesize=8388608000
        parallel=16
        full=y
        exclude=SCHEMA:"='MDDATA'"
        exclude=SCHEMA:"='OLAPSYS'"
        exclude=SCHEMA:"='ORDSYS'"
        exclude=SCHEMA:"='DMSYS'"
        exclude=SCHEMA:"='OUTLN'"
        exclude=SCHEMA:"='ORDPLUGINS'"
        exclude=STATISTICS
        job_name='linux_migration'
        #transform=oid:n
        QUERY=applsys.wf_item_attribute_values:"where item_type != 'WFERROR' and name != 'EVENT_MESSAGE'"
        logfile=expdp_for_of1uat.log

3. Apply patch 7120092 to the source administration server node
4. Record Advanced Queue settings
          sqlplus /nolog 
          connect / as sysdba; 
          @auque1.sql 
         This will generate auque2.sql.
5. Create the target database instance
   We modified the script aucrdb.sql to suit our requirements (changed file names and storage parameters).
   Recreate the control files with maxinstances = 3.
        sqlplus '/ as sysdba'
        startup nomount
        @aucrdb_OF1PRD.sql
6. Set up the SYS schema 
     sqlplus '/ as sysdba' 
      @audb1110.sql
7. Set up the SYSTEM schema
         sqlplus system/xxxxx 
          @ausy1110.sql
8. Install Java Virtual Machine
         sqlplus system/xx 
        @aujv1110.sql
9. Install other required components
          sqlplus system/xx 
        @aumsc1110.sql FALSE SYSAUX TEMP
10. Disable automatic gathering of statistics
        sqlplus "/ as sysdba" 
      alter system enable restricted session; 
      @adstats.sql 
      sqlplus "/ as sysdba" 
      alter system disable restricted session; 
11. We have SAN/NAS for ASM storage. Since we were migrating to Linux, the diskgroups were recreated on the new box once storage was available.
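
Recreating a diskgroup on the Linux side is straightforward from the new ASM instance once the LUNs are presented; the diskgroup name and disk paths below are placeholders:

        -- run as sysasm against the new Grid Infrastructure ASM instance
        CREATE DISKGROUP DATA_OF1PRD EXTERNAL REDUNDANCY
          DISK '/dev/oracleasm/disks/DATA01',
               '/dev/oracleasm/disks/DATA02';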

Once the downtime began, the following steps were performed.

1. Set cluster_database = false and bring up the instance on node 1 (we have a 2-node RAC).
2. Grant a privilege to the source system schema. As SYS: grant EXEMPT ACCESS POLICY to system; This is needed because we had VPD policies in our Apps environment.
3. Create the directory for the Data Pump export. As SYS:
        create directory linux_mig as '/linux_migration/OF1PRD';
        grant read, write on directory linux_mig to public;
4. Purge unused columns
        select * from sys.dba_unused_col_tabs;

        set head off pages 0 feedback off verify off echo off
        spool dropUnusedCols.sql
        select 'alter table ' || owner || '.' || table_name || ' drop unused columns;'
        from sys.dba_unused_col_tabs;
        spool off
   Clean up dropUnusedCols.sql and run it as sysdba. Ignore these errors:
        ERROR at line 1:
        ORA-12996: cannot drop system-generated virtual column
5.   Purge recycle bins
        as sysdba 
            PURGE DBA_RECYCLEBIN; 
             select owner, count(*) from dba_recyclebin group by owner;

6. This was not documented in note 362205.1, but there was a bug in Data Pump export that truncated very large packages. I did not track down the exact bug, but large XLA packages were truncated on export. The solution was to drop them before the export and recreate them after the Data Pump import (step 18 of the import tasks below).
genDropXLA.sql 
        set pages 0 head off feedback off lines 170 echo off trim on verify off termout off timing on 
        spool dropXLA_AAD_Pkgs.sql 
        select distinct('drop package ' || owner || '.' || object_name || ';') 
        from dba_objects 
        where object_type='PACKAGE BODY' and object_name like 'XLA_00%AAD%PKG'; 
        spool off 
        set timing on 
      @dropXLA_AAD_Pkgs 

7. Export the applications database instance
expdp system/[system password] parfile=auexpdp.dat
 
Our database size was 2 TB and it took roughly 4 hrs to do the export.
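
If you want a rough size estimate before committing to a downtime window, expdp can do a dry run against the same directory object (no dump files are written):

        expdp system/[system password] directory=linux_mig full=y estimate_only=y estimate=blocks logfile=expdp_estimate.log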

Import the Applications database instance 
The import process was where we did things differently for index creation. In our first environment, when we went through the standard process, the import ran for more than 2 days and still did not complete, because the indexes were created serially. There were some notes on Metalink about how to speed up index creation, but they were not very useful in reducing the time. On googling, I found that Randolf Geist had written a script for parallel object creation with Data Pump import (http://oracle-randolf.blogspot.com/2011/02/concurrent-index-creation.html). I modified it to our requirements to create the indexes in parallel. With this, the entire import process completed in less than 10 hrs, as opposed to the original 2+ days without ever completing.
There was another issue that we ran into (probably because of our patch level; it was never fixed by Oracle at the time): Data Pump import would crash after running for about 2 hours, and the workaround was to restart it. We gave all the information to Oracle in the SR that was raised, but no patch was provided. So we simply monitored the import closely and restarted it when it failed. This happened only once during the entire import process, in our case roughly 2 hrs after the start of the import.
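
Restarting is just a matter of re-attaching to the stopped job; with the job name from our import parameter file (shown below) it looks like this:

        impdp system/[system password] attach=LIN_MIG_IMP
        Import> start_job
        Import> continue_client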



1. Create the import parameter files

All files will be in LINUX_MIGRATION_OF1UAT.

        create or replace directory linux_mig as '/linux_migration/expdp/';
        grant read, write on directory linux_mig to public;

Sample impdp parameter file:

        directory=linux_mig
        dumpfile=aexp%U.dmp
        parallel=24
        full=y
        transform=oid:n
        job_name='lin_mig_imp'
        logfile=impdp_OF1UAT.log

2. Import the Applications database instance - Phase I

Run the import and monitor it. When impdp starts creating indexes, go to the next step.
Note: The impdp fails after about 1.5 to 2 hrs. The workaround is to restart the impdp after restarting the database and ASM (see the attach sketch earlier). This is phase 1 of the import.
3. Create indexes in parallel.
As I mentioned, the credit for this goes to Randolf Geist. I only applied his script, with slight modifications to suit our environment and requirements. Here is the readme I prepared for my own usage.


Parallel index creation
Step 1: Generate a sql file containing the CREATE INDEX DDL for all indexes, using an impdp parameter file like this:

        directory=linux_mig
        dumpfile=aexp%U.dmp
        sqlfile=all_indexes.sql
        include=INDEX

That generates the DDL for all the indexes (it does not create them). Open the file and remove the extra portions, such as the create index for fnd_lobs_ctx and the other domain indexes. Call the result all_index_mod.sql. I will update this document with exactly what needs to be removed, but you can open all_indexes.sql and figure it out as well.
Cleanup of the file (in vi):
        :%s!ALTER SESSION SET EDITION= "ORA$BASE";!!g
DO NOT DO THIS STEP IN PARALLEL WITH THE MAIN IMPORT. DO IT BEFORE OR AFTER THE IMPORT.
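
If you prefer doing the cleanup outside vi, a sed one-liner does the same substitution (file names as in the readme above; you still need to strip the domain index DDL by hand):

        sed 's!ALTER SESSION SET EDITION= "ORA$BASE";!!g' all_indexes.sql > all_index_mod.sql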
Step 2: Grant the privileges required by the script owner (kmadduri in our case), then run the concurrent index creation script:


SQL>
SQL> grant execute on dbms_aq to kmadduri;

Grant succeeded.

SQL> grant execute on dbms_aqadm to kmadduri;

Grant succeeded.

SQL> grant execute on dbms_scheduler to kmadduri;

Grant succeeded.

grant create table, create type, create procedure to kmadduri;

OS Path: /linux_migration/OF2INT -- This is the path of export dump files.

Enter the file name
of the file containing the CREATE INDEX DDL commands

Please double-check that this file contains the intended
DDL commands, in particular this should NOT be a genuine SQLFILE
generated via IMPDP that has NOT been post-processed by the
"transform_all_sql.sh" unix script (see documentation for more details)

Furthermore please note that at present the code does not
tokenize the file contents but simply executes the commands
delimited by semicolon. For the intended purpose of creating indexes
this should be sufficient except for rare cases of function-based
indexes containing semicolons in string expressions as part of the
virtual column definition, but other SQL like complex CTAS statements
might contain such additional semicolons - use with care in such cases.

Filename: all_index_mod.sql

Enter the number of concurrent worker threads for serial indexes
The default is NULL which means use CPU_COUNT * INSTANCES threads

If you do not want two sets of workers enter here the total number
of worker threads and 0 for the number of threads for the second worker set

You need then also to enter the SAME worker thread name for both
sets in the following prompts otherwise this will not work as expected

Serial Degree (default NULL): 64

Enter the number of concurrent worker threads for parallel indexes
The default is 1, enter a number or NULL for automatic degree
which is again CPU_COUNT * INSTANCES threads (could be high for parallel DDL)

If you do not want two sets of workers enter here 0
for the number of threads of the second worker set

You need then also to enter the SAME worker thread name for both
sets in the following prompts otherwise this will not work as expected

Parallel Degree (default 1): 0

Enter the name of worker thread 1
The default is SERIAL_INDEX

If you do not want two sets of workers enter here the same name
as for the second worker thread on the next prompt
otherwise this will not work as expected
Maximum length of identifier: 20 chars

Worker set 1 name (default SERIAL_INDEX):

Enter the name of worker thread 2
The default is PARALLEL_INDEX

If you do not want two sets of workers enter here the same name
as for the first worker thread on the previous prompt
otherwise this will not work as expected
Maximum length of identifier: 20 chars

Worker set 2 name (default PARALLEL_INDEX): SERIAL_INDEX

Starting concurrent index creation...

Check AQ$CREATE_INDEX_QUEUE view for number of tasks
Check CREATE_INDEX_CONCURRENT_LOG table for progress and errors
You are about to start concurrent index creation now...

OS Path             : /linux_migration/OF2INT
Filename            : all_index_mod.sql
Serial Degree       : 64
Parallel Degree     : 0
Worker Thread 1 Name: SERIAL_INDEX
Worker Thread 2 Name: SERIAL_INDEX

Hit CTRL+C to cancel, ENTER to continue...

SQL>
SQL> create or replace directory CREATE_INDEX_CONCURRENT_DIR as '&os_path';
old   1: create or replace directory CREATE_INDEX_CONCURRENT_DIR as '&os_path'
new   1: create or replace directory CREATE_INDEX_CONCURRENT_DIR as '/linux_migration/OF2INT'

Directory created.

Elapsed: 00:00:00.01
SQL>
SQL> begin
  2    pk_create_index_concurrent.create_index_concurrent(
  3      p_directory_name        => 'CREATE_INDEX_CONCURRENT_DIR'
  4    , p_file_name             => '&file_name'
  5    , p_parallel_degree_set_1 => &serial_degree
  6    , p_parallel_degree_set_2 => &parallel_degree
  7    , p_worker_set_id_1       => '&thread_1_name'
  8    , p_worker_set_id_2       => '&thread_2_name'
  9    );
 10  end;
 11  /
old   4:   , p_file_name             => '&file_name'
new   4:   , p_file_name             => 'all_index_mod.sql'
old   5:   , p_parallel_degree_set_1 => &serial_degree
new   5:   , p_parallel_degree_set_1 => 64
old   6:   , p_parallel_degree_set_2 => &parallel_degree
new   6:   , p_parallel_degree_set_2 => 0
old   7:   , p_worker_set_id_1       => '&thread_1_name'
new   7:   , p_worker_set_id_1       => 'SERIAL_INDEX'
old   8:   , p_worker_set_id_2       => '&thread_2_name'
new   8:   , p_worker_set_id_2       => 'SERIAL_INDEX'


4. While step 3 is in progress, some indexes will be created by Data Pump as well. That is fine. There may be some locking; I did not hit any locks, but you could, and I decided to monitor and kill blockers rather than stopping the import, running the parallel index creation alone, and then restarting the impdp job. This actually saved some time, because both the impdp and the parallel index creation script were creating indexes, though the majority were created by the parallel index script. Once step 3 completes and Data Pump reaches the fnd_lobs_ctx index, skip it: fnd_lobs_ctx takes a long time to build and can be done in parallel while the other post tasks are running. There is a note on Metalink that gives steps on how to speed up fnd_lobs_ctx. I also removed all the domain index creation statements from step 3 and ran them from the sql prompt instead.
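
To keep an eye on locking while both the impdp and the script are creating indexes, a quick query against gv$session (we are on RAC) shows blockers; this is just my monitoring sketch, not part of Randolf's script:

        select blocking_instance, blocking_session, inst_id, sid, serial#, event, seconds_in_wait
        from gv$session
        where blocking_session is not null;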

This whole process took around 7 hrs to complete (step 1 to step 4). 

5. Revoke privilege from target system schema (revoke EXEMPT ACCESS POLICY from system;)

6. Reset Advanced Queues
        sqlplus /nolog
        connect / as sysdba;
        @auque2.sql

7. Copy listener.ora and tnsnames.ora from the old host and change the host names.
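
A quick way to do the host name swap (the host names here are placeholders):

        sed -i 's/old-solaris-host/new-linux-host/g' $TNS_ADMIN/listener.ora $TNS_ADMIN/tnsnames.ora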
8. Run adgrants.sql
   sqlplus '/as sysdba' 
    @adgrants.sql apps 

9. Grant create procedure privilege on CTXSYS ( sqlplus apps/[APPS password]  @adctxprv.sql system_pwd CTXSYS)

10. Run AutoConfig
-- Change context files on apps tiers to reflect new db host. 
-- EXEC FND_CONC_CLONE.SETUP_CLEAN; 
-- Make sure ISTWatch is included in listener.ora. This was required for us; it may not be required for all implementations.

11. Change dads.conf on each middle tier. This is also required for us because we still use mod_plsql even though we are on 12.1.2.
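
For reference, the piece that needs editing is the DAD's connect string; a typical entry in dads.conf looks roughly like this (the location path, host, port and SID are placeholders for our values):

        <Location /pls/OF1PRD>
          SetHandler pls_handler
          PlsqlDatabaseUsername      APPS
          PlsqlDatabaseConnectString new-linux-host:1521:OF1PRD
          PlsqlAuthenticationMode    Basic
        </Location>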

12. Gather statistics for SYS schema 
sqlplus "/ as sysdba" 
  alter system enable restricted session; 
  @adstats.sql 
 sqlplus "/ as sysdba" 
 alter system disable restricted session; 

13. Re-create custom database links

14. Create ConText and AZ objects
      Apply patch 7129002
   cd $AU_TOP/patch/115/bin/ 
  perl $AU_TOP/patch/115/bin/dpost_imp.pl step9.drv 11 
Once the driver file has been generated, use AutoPatch to apply it on the target administration server node.
You may get a warning that the patch has already been applied and be asked whether you want to continue; say Yes.

15. Populate CTXSYS.DR$SQE table 
    sqlplus apps/[apps password] 
exec icx_cat_sqe_pvt.sync_sqes_for_all_zones;

16. Compile invalid objects (@utlrp)
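
To verify the compile, I check for leftover invalids after utlrp finishes:

        sqlplus "/ as sysdba"
        @?/rdbms/admin/utlrp.sql
        select owner, object_type, count(*)
        from dba_objects
        where status = 'INVALID'
        group by owner, object_type;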

17. Run AD Administration on the target administration server node. From the Maintain Applications Database Objects menu, perform the following tasks:
    1. Compile flexfield data in AOL tables
    2. Recreate grants and synonyms for APPS schema

18. Create the XLA packages as the apps user
        sqlplus apps/[apps password]

   Apply patch 10396457
        SQL> @xla6128278.sql
        Enter value for outfilename: xla6128278.sql

This step is not documented in the Metalink note above; I got it from an Exadata Oracle Apps migration white paper. It recreates the large XLA packages that would otherwise fail to be created and hence not compile properly.

Other issues:
-- csr_rules_b does not get imported

SQL> PURGE RECYCLEBIN;
Recyclebin purged.

        set serveroutput on
        BEGIN
          DBMS_XMLSCHEMA.deleteSchema(
            schemaurl     => 'http://xmlns.oracle.com/CRM/Scheduler/Rules',
            delete_option => DBMS_XMLSCHEMA.DELETE_CASCADE_FORCE);
          COMMIT;
        EXCEPTION
          WHEN OTHERS THEN
            dbms_output.put_line('Encountered ' || SQLERRM || ' when deleting Scheduler Rules XML Schema');
        END;
        /
-- Reset the password of the csr user to the same one as apps (first check if the apps password works for csr)
FNDCPASS apps/xxxxx 0 Y system/xxxx ORACLE CSR xxxxx
        cd $CSR_TOP/patch/115/sql
        @csrrsreg.sql apps xxxxx csr xxxxx

--- WWV_FLOW_GENERATE_DDL is invalid

grant select on dba_db_links to FLOWS_030100;
grant select on DBA_DEPENDENCIES to FLOWS_030100;
grant select on DBA_TRIGGERS to FLOWS_030100;
grant select on DBA_TYPES to FLOWS_030100;
alter package FLOWS_030100.WWV_FLOW_GENERATE_DDL compile body;


-- Drop and recreate fnd_lobs_ctx
        cd $FND_TOP/sql
        drop index applsys.fnd_lobs_ctx;
        update fnd_lobs
        set FILE_FORMAT = 'IGNORE'
        where NVL(PROGRAM_NAME,'NULL') != 'FND_HELP';
        @aflobbld.sql applsys apps
        update fnd_lobs
        set file_format = fnd_gfm.set_file_format(file_content_type);
        select FILE_FORMAT, count(*)
        from fnd_lobs
        group by FILE_FORMAT;

-- Gather stats completes with errors
        select hc.table_name, hc.column_name
        from FND_HISTOGRAM_COLS hc, dba_tab_columns tc
        where hc.table_name = 'FII_FIN_ITEM_HIERARCHIES'
        and hc.table_name = tc.table_name (+)
        and hc.column_name = tc.column_name (+)
        and tc.column_name is null;

        delete from FND_HISTOGRAM_COLS
        where (table_name, column_name) in
          (
           select hc.table_name, hc.column_name
           from FND_HISTOGRAM_COLS hc, dba_tab_columns tc
           where hc.table_name = 'FII_FIN_ITEM_HIERARCHIES'
           and hc.table_name = tc.table_name (+)
           and hc.column_name = tc.column_name (+)
           and tc.column_name is null
          );
        exec dbms_stats.unlock_schema_stats('APPLSYS')

-- Additional Privs

GRANT ALTER ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT DELETE ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT INDEX ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT INSERT ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT SELECT ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT UPDATE ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT REFERENCES ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT ON COMMIT REFRESH ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT QUERY REWRITE ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT DEBUG ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION; 
GRANT FLASHBACK ON "CSR"."CSR_RULE_WINDOWS_B" TO "APPS" WITH GRANT OPTION;

-- ACL Setup
Drop the existing ACLs and create ACLs similar to the ones on Solaris.
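
We scripted ours from the Solaris definitions; a minimal sketch using DBMS_NETWORK_ACL_ADMIN looks like this (the ACL name, principal, host and ports are placeholders, not our actual values):

        BEGIN
          -- create the ACL and grant connect to the schema that needs network access
          DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
            acl         => 'apps_network_acl.xml',
            description => 'Network access for APPS',
            principal   => 'APPS',
            is_grant    => TRUE,
            privilege   => 'connect');
          -- bind the ACL to the target host and port range
          DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
            acl        => 'apps_network_acl.xml',
            host       => 'smtp.example.com',
            lower_port => 25,
            upper_port => 25);
          COMMIT;
        END;
        /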

-- compile all custom forms
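
We loop over the custom .fmb files with frmcmp_batch; a rough sketch (the XX prefix and paths are specific to our environment):

        cd $AU_TOP/forms/US
        for f in XX*.fmb
        do
          frmcmp_batch userid=apps/xxxxx module=$f \
            output_file=`echo $f | sed 's/fmb$/fmx/'` \
            module_type=form compile_all=special
        done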



   
   
