For this setup, I create a folder “./out” and a folder “./log” on the OS, plus the necessary named pipes.
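The one-time OS preparation can be sketched as follows; the pipe names are an assumption (they only have to match the “file=” entries in the export parameter files):

```shell
#!/bin/sh
# Sketch of the one-time OS preparation. Pipe names are assumptions; one
# pipe per parallel export stream, matching "file=" in the .par files.
mkdir -p ./out ./log

for p in pipe_one pipe_two pipe_3ee; do
    rm -f "$p"      # remove a stale pipe left over from an earlier run
    mkfifo "$p"     # named pipe: exp writes into it, compress reads from it
done

ls -l pipe_one pipe_two pipe_3ee
```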
In the source database, I create as many users as I need; each of these users has at least the privilege to export the entire database.
If you add FGA policies on tables of a specific user (and I did), you can simulate a parallel export in database versions that do not support this natively.
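Such a policy is added with DBMS_FGA.ADD_POLICY. The sketch below only generates the PL/SQL; schema, table, policy name, and condition are all invented for illustration, and the exact export behaviour with FGA policies depends on your database version, so test it against your own data first:

```shell
#!/bin/sh
# Sketch only: writes the PL/SQL for one hypothetical FGA policy on
# OWNONE.BIG_TABLE. All names and the slicing condition are invented;
# adapt them to how you want to split the data over the export streams.
cat > add_fga_policy.sql <<'EOF'
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'OWNONE',
    object_name     => 'BIG_TABLE',
    policy_name     => 'EXP_SLICE_1',
    audit_condition => 'MOD(id, 3) = 0');
END;
/
EOF
# To apply it, feed the file to sqlplus as a privileged user, e.g.:
#   sqlplus -s system@SRCDB @add_fga_policy.sql
cat add_fga_policy.sql
```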
mstr.sh >>

#!/bin/ksh
datum=`date '+%Y.%m.%d@%H:%M.%S'`
cp /dev/null mstr.log
echo "#-----------" started overall $datum "--------------#" > mstr.log
unset datum
(time ./one.sh) 1> ./out/one.out 2>&1 < /dev/null &
(time ./two.sh) 1> ./out/two.out 2>&1 < /dev/null &
(time ./3ee.sh) 1> ./out/3ee.out 2>&1 < /dev/null &
wait
./tarring.sh
wait
datum=`date '+%Y.%m.%d@%H:%M.%S'`
echo "#-----------" stopped overall $datum "--------------#" >> mstr.log
unset datum
uuencode output.tar.Z output.tar.Z | mailx -r recipient -s "MSTR logging" recipient
Reminder: this master log will not contain the end of the logging, because the log is sent before the final remark is appended to it.
supp.sh >>

#!/bin/ksh
dateone=`date '+%Y.%m.%d@%H:%M.%S'`
sleep 1
echo "#---" started EXPONE $dateone >> mstr.log
unset dateone
nohup compress < pipe_one > /filesystem/$(date +"%Y%m%d")/$(date +"%Y%m%d")_EXPONE.dmp.Z &
sleep 1
nohup exp parfile=expone.par &
wait
dateone=`date '+%Y.%m.%d@%H:%M.%S'`
echo "#--- ---" stopped EXPONE $dateone >> mstr.log
unset dateone
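The pipe mechanics can be tried without Oracle. In this toy version gzip stands in for compress (same principle, more widely installed) and a printf stands in for exp:

```shell
#!/bin/sh
# Toy reproduction of the supp.sh pipe pattern: a compressor drains a named
# pipe in the background while a writer (standing in for exp) fills it.
# gzip replaces compress purely because it is more widely installed.
rm -rf demo_out && mkdir -p demo_out
mkfifo demo_out/pipe_one

gzip < demo_out/pipe_one > demo_out/dump.gz &   # reader side, like compress
printf 'row1\nrow2\n' > demo_out/pipe_one       # writer side, like exp
wait                                            # let gzip finish

gzip -dc demo_out/dump.gz                       # prints row1 and row2
```

The same ordering matters as in supp.sh: the reader must be attached to the pipe before the writer starts, which is what the sleep calls in the original script are for.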
expone.par >>

userid=EXPONE/export
file=pipe_one
log=log/date_EXPONE.log
consistent=y
direct=y
buffer=268435456
grants=n
constraints=n
owner=(ownone, owntwo, own3ee)
rows=y
compress=y
When you would like to use FGA to simulate a parallel export, use the “tables=” clause. Be sure to set “direct=n”, otherwise all audited tables will be exported regardless of the policy.
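A sliced parameter file could then look like the following sketch; the file name, table list, and log name are assumptions, and you would keep one such file per simulated export stream:

```
userid=EXPONE/export
file=pipe_one
log=log/date_EXPONE_slice1.log
consistent=y
direct=n
buffer=268435456
tables=(ownone.big_table)
rows=y
compress=y
```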
tarring.sh >>

#!/bin/ksh
datetar=`date '+%Y.%m.%d@%H:%M.%S'`
echo "#---" started tarring $datetar >> mstr.log
unset datetar
tar -cvf output.tar ./out
tar -uvf output.tar mstr.log
compress output.tar
datetar=`date '+%Y.%m.%d@%H:%M.%S'`
echo "#---" ended tarring $datetar >> mstr.log
unset datetar
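The archive steps can also be tried standalone; gzip again substitutes for compress here, and the directory and file names are invented for the demo:

```shell
#!/bin/sh
# Miniature of tarring.sh: pack the out directory, append the master log
# with -u, then compress the archive. gzip replaces compress for portability.
rm -rf tardemo && mkdir -p tardemo/out
echo "worker output" > tardemo/out/one.out
echo "overall log"   > tardemo/mstr.log

tar -cf tardemo/output.tar -C tardemo out       # create archive from out/
tar -uf tardemo/output.tar -C tardemo mstr.log  # -u adds/updates mstr.log
gzip tardemo/output.tar                         # yields output.tar.gz
tar -tzf tardemo/output.tar.gz                  # list the archive contents
```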
The other way round:
one.sh >>

#!/bin/ksh
dateone=`date '+%Y.%m.%d@%H:%M.%S'`
sleep 1
echo "#---" started IMPONE $dateone >> mstr.log
unset dateone
zcat /filesystem/date_EXPONE.dmp.Z > pipe &
sleep 3
nohup imp parfile=impone.par &
wait
dateone=`date '+%Y.%m.%d@%H:%M.%S'`
echo "#--- ---" stopped IMPONE $dateone >> mstr.log
unset dateone
impsupp.par >>

userid=IMPONE/import
file=pipe
log=log/20101120_IMPSUPP.log
buffer=100000
full=y
ignore=y
skip_unusable_indexes=y
The “skip_unusable_indexes=y” was used in a situation where a full database export had been loaded into the target database without the data.
I had a refresh protocol set up, which allowed me to sync the target db with the source db after the partial export/import was done.