This blog is for sharing my experiences and learnings on Oracle Database, EBS, OCI & Ansible.
Saturday, June 3, 2017
RMAN BASICS
Let's begin with an overview of RMAN:
RMAN stands for Recovery Manager. It is the Oracle-supplied client that not only performs backup and recovery for an Oracle database but also makes a DBA's life easier by automating backup, restore, and recovery tasks. RMAN consists of backup utilities that work together with your Oracle database to perform database backups.
RMAN components:
Target Database:
The target database is the database registered with RMAN, on which RMAN performs backup and recovery operations. The metadata about all operations performed by RMAN (also known as the RMAN repository) is recorded in the database control file.
RMAN client:
The RMAN client is the executable required to run RMAN and perform the desired tasks. It is located in $ORACLE_HOME/bin. No extra effort is required to install the client, because it is installed automatically along with the Oracle software.
How RMAN Works Internally:
RMAN backup and recovery operations for a target database are managed by the RMAN client. RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations. The RMAN client itself does not perform backup, restore, or recovery operations. When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations. The work of backup and recovery is performed by server sessions running on the target database. A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance. The channel reads data into memory, processes it, and writes it to the output device.
When you take a database backup using RMAN, you connect to the target database using the RMAN client. The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net. For a backup you need to allocate an explicit or implicit channel to the target database. An RMAN channel represents one stream of data to a device and corresponds to one database server session. This session dynamically collects information about the files from the target database control file before taking the backup or while restoring.
For example, if you issue 'BACKUP DATABASE' from RMAN, it first gets all the datafile information from the control file. It then divides the datafiles among the allocated channels (roughly equal amounts of work based on datafile size). The backup is then taken in two steps. In the first step, each channel reads all the blocks of the entire datafile to find the formatted blocks to back up. Note: RMAN does not back up unformatted blocks. In the second step it backs up the formatted blocks. This is the biggest advantage of using RMAN, as it only backs up the required blocks. For instance, in a 100 MB datafile there may be only 10 MB of useful data while the remaining 90 MB is free; RMAN will back up only those 10 MB.
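As a concrete illustration of the channel mechanics described above, here is a minimal sketch of a disk backup with explicitly allocated channels; the backup destination path is only a placeholder and should be adapted to your environment.
$ rman target /
RMAN> run {
allocate channel ch1 device type disk format '/backup/rman/%U';
allocate channel ch2 device type disk format '/backup/rman/%U';
backup database plus archivelog;
}
With two channels allocated, RMAN splits the datafiles between the two server sessions, and each channel writes its own backup pieces to the format location.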
Thursday, March 30, 2017
Which device is mapped to an Oracle ASM disk?
[root@rac2 ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
[root@rac2 ~]# oracleasm querydisk -d DATAVOL1
Disk "DATAVOL1" is a valid ASM disk on device [8,33]
[root@rac2 ~]# ls -ld /dev/*|grep "8, 33"
brw-rw---- 1 root disk 8, 33 Mar 29 19:09 /dev/sdc1
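If your oracleasm-support version provides the -p option, the device mapping can also be read directly, without the grep against /dev (treat the exact flag support as an assumption and check oracleasm querydisk -h first). The reverse lookup also works by passing the device name itself:
[root@rac2 ~]# oracleasm querydisk -p DATAVOL1
[root@rac2 ~]# oracleasm querydisk /dev/sdc1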
Monday, March 6, 2017
How to check current SCN?
SQL> select timestamp_to_scn(to_timestamp('01/01/2017 00:30:00','DD/MM/YYYY HH24:MI:SS')) as scn from dual;
SCN
----------
6.4792E+12
SQL> set numwidth 30
SQL> select timestamp_to_scn(to_timestamp('01/01/2017 00:30:00','DD/MM/YYYY HH24:MI:SS')) as scn from dual;
SCN
------------------------------
6479247526568
SQL> select scn_to_timestamp(6479247526568) from dual;
SCN_TO_TIMESTAMP(6479247526568)
---------------------------------------------------------------------------
01-JAN-17 12.29.58.000000000 AM
SQL>
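For completeness, the current SCN itself can be read directly from v$database (or via DBMS_FLASHBACK); the conversion functions above are only needed for mapping between SCNs and timestamps:
SQL> select current_scn from v$database;
SQL> select dbms_flashback.get_system_change_number from dual;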
Monday, November 28, 2016
ASMLIB
This document describes some advantages that the Linux-specific ASM library provided by Oracle (herein "ASMLib") brings to the administration of a Linux system running Oracle. Linux often presents the challenge of disk name persistence. Change the storage configuration and a disk that appeared as /dev/sdg yesterday can appear as /dev/sdh after a reboot today. How can these changes be isolated so that they do not affect ASM?
Why Not Let ASM Scan All Disks?
ASM scans all disks it is allowed to discover (via the asm_diskstring). Why not scan all the disks and let ASM determine which it cares about, rather than even worrying about disk name persistence?
The question is notionally correct. If you pass /dev/sd* to ASM, and ASM can read the devices, ASM can indeed pick out its disks regardless of whether /dev/sdg has changed to /dev/sdh on this particular boot.
However, to read these devices, ASM has to have permission to read these devices. That means ASM has to have user or group ownership on all devices /dev/sd*, including any system disks. Most system administrators do not want to have the oracle user own system disks just so ASM can ignore them. The potential for mistakes (a DBA writing over the /home volume, etc.) is way too high.
ASMLib vs UDev or DevLabel
There are various methods to provide names that do not change, including devlabel and udev. What does ASMLib provide that these solutions do not?
The bigger problem is not specifically a persistent name - it is matching that name to a set of permissions. It doesn't matter if /dev/sdg is now /dev/sdh, as long as the new /dev/sdh has oracle:dba ownership and the new /dev/sdg - which used to be /dev/sdf - has the ownership the old /dev/sdf used to have. The easiest way to ensure that permissions are correct is persistent naming. If a disk always appears as the same name, you can always apply the same permissions to it without worrying. In addition, you can then exclude names that match system disks. Even if the permissions are right, a system administrator isn't going to want ASM scanning system disks every time.
Now, udev or devlabel can handle keeping sdg as sdg (or /dev/mydisk, whatever). What does ASMLib add? A few things, actually. With ASMLib, there is a simple command to label a disk for ASM. With udev, you'll have to modify the udev configuration file for each disk you add. You'll have to determine a unique id to match the disk and learn the udev configuration syntax.
The name is also human-readable. With an Apple XServe RAID, why have a disk named /dev/sdg when it can be DRAWER1DISK2? ASMLib can also list all disks, whereas with udev you have to either know in your head that sdg, sdf, and sdj are for ASM, or you have to provide names. With ASMLib, there is no chance of ASM itself scanning system disks. In fact, ASMLib never modifies the system's names for disks. ASMLib never uses the name "/dev/sdg". After querying the disks at boot time, it provides its own access to the devices with permissions for Oracle. /dev/sdg is still owned by root:root, and the oracle user still cannot access the device by that name.
The configuration is persistent. Reinstall a system and your udev configuration is gone; ASMLib's labels are not. With udev, you have to copy the configuration over to the other nodes in a RAC. If you have sixteen nodes, you have to copy each configuration change to all sixteen nodes. Whether you use udev or devlabel, you have to set the permissions properly on all sixteen nodes. ASMLib just requires one invocation of "/etc/init.d/oracleasm scandisks" to pick up all changes made on the other node.
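To make the "simple command to label a disk" concrete, a minimal sketch of the usual workflow is shown below; the device name and label are placeholders for your own storage:
# /etc/init.d/oracleasm createdisk DATAVOL2 /dev/sdd1
# /etc/init.d/oracleasm scandisks        (run on the other RAC nodes)
# /etc/init.d/oracleasm listdisks
createdisk writes the ASM label on the first node, and scandisks on the remaining nodes picks up the new label without touching udev rules or permissions.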
Tuesday, November 8, 2016
Oracle Wallet (Orapki commands)
Wallets:
A wallet is a password-protected container used to store authentication and signing credentials, including the private keys, certificates, and trusted certificates needed by SSL. The wallets created by orapki can be read by Oracle Database, Oracle Application Server (OHS), and the Oracle Identity Management infrastructure.
There are three types of wallets:
1. Password protected - ewallet.p12
Requires a password for all operations (display, add, delete).
2. Password protected with auto-login - ewallet.p12 & cwallet.sso
Requires a password only for add and delete, not for display. It can be read by Oracle products without a password.
3. Auto-login only - cwallet.sso
Does not require a password for any operation.
Creating and Viewing Oracle Wallets with orapki:
1.password-protected wallet creation:
orapki wallet create -wallet wallet_location
This command will prompt you to enter and re-enter a wallet password. It creates a wallet in the location specified for -wallet.
2.Password-Protected with auto-login enabled: (Used in our environments, Typically used by all)
orapki wallet create -wallet wallet_location -auto_login
This command creates a wallet with auto-login enabled, or it can also be used to enable auto-login on an existing wallet. If the wallet_location already contains a wallet, then auto-login will be enabled for it. To disable the auto-login feature, delete cwallet.sso. For wallets with the auto-login feature enabled, you are prompted for a password only for operations that modify the wallet, such as adding or removing credentials.
3. Autologin wallets:
To create an auto login wallet (cwallet.sso) that does not need a password, use the following command:
orapki wallet create -wallet wallet_location -auto_login_only
This command creates an auto login wallet (cwallet.sso) that does not need a password to open. You can also modify or delete the wallet without using a password. File system permissions provide the necessary security for such auto login wallets.
4.To view an Oracle wallet:
orapki wallet display -wallet wallet_location
5.Modifying the Password for a Wallet:
To change the wallet password, use the following command:
orapki wallet change_pwd -wallet wallet_location [-oldpwd password ] [-newpwd password]
6.How to add private key and certificates generated using openssl?
a. Create the wallet using openssl:
openssl pkcs12 -export -in jaydba_blogspot_com_cert.cer -inkey jaydba_blogspot_com.key -certfile jaydba_blogspot_com_interm.cer -out ewallet.p12
b.Enable auto login:
orapki wallet create -wallet . -auto_login
[or]
If you face Issues while enabling Autologin:
In Oracle HTTP Server 12.1.3, an exception is thrown when attempting to set the auto-login flag on a wallet using the orapki command. I got this error:
Exception in thread "main" java.lang.NullPointerException
at oracle.security.pki.OracleKeyStoreSpi.a(Unknown Source)
at oracle.security.pki.OracleSSOKeyStoreSpi.a(Unknown Source)
at oracle.security.pki.OracleFileWalletImpl.b(Unknown Source)
at oracle.security.pki.OracleWallet.saveSSO(Unknown Source)
at oracle.security.pki.textui.OracleWalletTextUI.create(Unknown Source)
at oracle.security.pki.textui.OracleWalletTextUI.command(Unknown Source)
at oracle.security.pki.textui.OraclePKITextUI.main(Unknown Source)"
Please follow the steps below as an alternative.
a. Create the wallet using openssl:
openssl pkcs12 -export -in jaydba_blogspot_com_cert.cer -inkey jaydba_blogspot_com.key -certfile jaydba_blogspot_com_interm.cer -out ewallet.p12
cp ewallet.p12 temp.p12
b. Create Auto login Wallet:
orapki wallet create -wallet . -auto_login -with_trust_flags -compat_v12
c. Finally import the pkcs12 file from step 1.
orapki wallet import_pkcs12 -wallet . -pkcs12file temp.p12
7.How to Convert JKS to wallet:
a. Create a password protected Oracle wallet with autologin:
orapki wallet create -wallet ./ -pwd password -auto_login
b.Migrate the JKS keystore entries to the wallet:
orapki wallet jks_to_pkcs12 -wallet ./ -pwd password -keystore ./ewallet.jks -jkspwd password
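After the migration, the wallet contents can be checked with the display command already covered in step 4; a quick sketch using the same placeholder password as above:
orapki wallet display -wallet ./ -pwd password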
Wednesday, July 27, 2016
How to drop and recreate TEMP Tablespace in Oracle 11g
Today we had a request to shrink the temp tablespace. For this task we don't need any downtime.
High level steps:
a. Create tablespace TEMP2.
b. Make TEMP2 the default temporary tablespace.
c. Drop tablespace TEMP.
d. Recreate tablespace TEMP.
e. Make TEMP the default temporary tablespace again.
f. Drop tablespace TEMP2.
Output:
1.Create Temporary Tablespace Temp2:
SQL> CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/backup/JAY_TMP/temp01.dbf' SIZE 4096M;
Tablespace created.
2. Move Default temp tablespace as temp2
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
Database altered.
3. Make sure No sessions are using your Old Temp tablespace
a. Find Session Number from V$SORT_USAGE:
SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;
b. Find Session ID from V$SESSION:
If the result set contains any rows, then your next step will be to find the SID from the V$SESSION view. You can find the session ID by using SESSION_NUM or SESSION_ADDR from the previous result set.
SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SERIAL#=SESSION_NUM;
OR
SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SADDR=SESSION_ADDR;
Kill the sessions.
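A minimal sketch of killing one of those sessions, using the SID and SERIAL# returned by the query above (the quoted values are placeholders):
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;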
4. Drop temp tablespace
SQL> DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
5. Recreate Tablespace Temp
SQL> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/backup/JAY_TMP/temp1.dbf' SIZE 4096M,'/backup/JAY_TMP/temp2.dbf' SIZE 4096M;
Tablespace created.
6 Move Tablespace Temp, back to new temp tablespace
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Database altered.
7. Drop temporary tablespace temp2:
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
There is no need to shut down the database when dropping the temp tablespace and recreating it. If something happens to the temp tablespace (e.g. crash, corruption), the Oracle database will ignore the error, but DML (insert, update, delete) and SELECT queries that need temp space will suffer.
Database crashed due to 100% usage of datafile mountpoint & recovery steps
Today one of my databases crashed because the datafile mountpoint reached 100% usage. We were getting the below errors in the alert log.
Linux-x86_64 Error: 28: No space left on device
ORA-63999: data file suffered media failure
ORA-01114: IO error writing block to file 14 (block # 2039938)
ORA-01110: data file 14: '/db05/oradata/JC2INT/vino_data05.dbf'
This is due to the hidden parameter _datafile_write_errors_crash_instance. Because of this, ANY write error to a datafile will cause the instance to abort.
We need to increase the storage. As a temporary solution, I decided to move the tempfiles to another mountpoint and then start the database.
In our case the DB was already down, so we moved the temp files from /db05 to the /backup mountpoint to get some free space.
cd /db05/oradata/JC2INT
mv temp05.dbf /backup/JC_TMP
SQL> startup mount;
ORACLE instance started.
Total System Global Area 2137886720 bytes
Fixed Size 2254952 bytes
Variable Size 637536152 bytes
Database Buffers 1493172224 bytes
Redo Buffers 4923392 bytes
Database mounted.
SQL> SELECT name FROM v$tempfile;
NAME
--------------------------------------------------------------------------------
/db05/oradata/JC2INT/temp05.dbf
/db05/oradata/JC2INT/temp04.dbf
/db05/oradata/JC2INT/temp03.dbf
/db05/oradata/JC2INT/temp02.dbf
/db05/oradata/JC2INT/TEMP01.dbf
SQL> ALTER DATABASE RENAME FILE '/db05/oradata/JC2INT/temp05.dbf' TO '/backup/JC_TMP/temp05.dbf';
Database altered.
SQL> SELECT name FROM v$tempfile;
NAME
--------------------------------------------------------------------------------
/backup/JC_TMP/temp05.dbf
/db05/oradata/JC2INT/temp04.dbf
/db05/oradata/JC2INT/temp03.dbf
/db05/oradata/JC2INT/temp02.dbf
/db05/oradata/JC2INT/TEMP01.dbf
SQL> alter database open;
Database altered.
_datafile_write_errors_crash_instance
If _datafile_write_errors_crash_instance = TRUE (default in 11gR2 & after) then any write to a datafile which fails due to an IO error causes an instance crash.
If _datafile_write_errors_crash_instance = FALSE (default before 11gR2), the behaviour reverts to the previous behaviour (before this fix), such that a write error to a datafile offlines the file if the DB is in archivelog mode and the file is not in the SYSTEM tablespace. If the DB is in noarchivelog mode, the instance will still be aborted.
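If you really need the pre-11gR2 behaviour, the hidden parameter can be set explicitly; this is a sketch only, and changing underscore parameters should be done under Oracle Support guidance:
SQL> ALTER SYSTEM SET "_datafile_write_errors_crash_instance"=FALSE SCOPE=SPFILE;
Restart the instance for the change to take effect.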
Monday, July 25, 2016
APEX Images not rendered after refresh
We refreshed one of our databases which is used by APEX applications. We store most of the images in the file system, but some of the images are stored in the database. After the refresh, when we tried to access the application, the APEX images stored in the database were not displayed.
Solution :
We need to reset the passwords (to the same values as before the refresh) and unlock the user accounts below, for example:
APEX_PUBLIC_USER
APEX_REST_PUBLIC_USER
APEX_LISTENER
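A minimal sketch of the reset, assuming you know the pre-refresh passwords (the password value is a placeholder):
SQL> ALTER USER APEX_PUBLIC_USER IDENTIFIED BY <old_password> ACCOUNT UNLOCK;
SQL> ALTER USER APEX_REST_PUBLIC_USER IDENTIFIED BY <old_password> ACCOUNT UNLOCK;
SQL> ALTER USER APEX_LISTENER IDENTIFIED BY <old_password> ACCOUNT UNLOCK;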
Tuesday, June 28, 2016
ORACLE_HOME Permissions Required to discover our DB Listener in Enterprise Manager(EM)
Today we gave the database details to the EM team to discover our database and listener. They discovered the database, but EM showed the DB listener as down even though it was up. While troubleshooting, we found the EM agent needs read permission on a few lib & mesg files.
$ chmod 775 $ORACLE_HOME/network/mesg
$ cd $ORACLE_HOME/network/mesg
$ chmod 666 *
$ chmod 775 $ORACLE_HOME/network/lib
$ cd $ORACLE_HOME/network/lib
$ chmod 666 *
$ cd $ORACLE_HOME/lib
$ chmod 644 *
P.S: Please make sure we use a proper umask (umask 022) when we install the ORACLE_HOME, so that we can avoid this issue.
Import Schema from one Tablespace to another Tablespace
We had a requirement to move a schema from one set of tablespaces to another. We can achieve this using expdp & impdp.
Current tablespace New Tablespace
IAM_LOB JAY_LOB
IAM_TABLE JAY_TABLE
IAM_INDEX JAY_INDEX
EXPORT SCHEMA USING EXPDP:
Create a parameter file with the below parameters:
oracle:JAYDEV> more para.par
DIRECTORY=JAY_DIR
SCHEMAS=JAY
DUMPFILE=JAY_EXPORT_%U.dmp
FILESIZE=8G
LOGFILE=EXPORT_JAY.log
JOB_NAME=JAY_EXPORT
expdp "'/as sysdba'" parfile=para.par
SQL> select tablespace_name from dba_ts_quotas where username='JAY';
TABLESPACE_NAME
------------------------------
IAM_LOB
IAM_TABLE
IAM_INDEX
select count(*) from dba_objects where owner='JAY';
COUNT(*)
----------
846
Now drop the user jay & tablespaces.
drop user jay cascade;
drop the old tablespaces.
SQL> select count(*) from dba_tables where tablespace_name like 'IAM%';
COUNT(*)
----------
0
SQL> select count(*) from dba_segments where tablespace_name like 'IAM%';
COUNT(*)
----------
0
SQL> select count(*) from dba_indexes where tablespace_name like 'IAM%';
COUNT(*)
----------
0
SQL> DROP TABLESPACE IAM_INDEX INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
SQL> DROP TABLESPACE IAM_LOB INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
SQL> DROP TABLESPACE IAM_TABLE INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
IMPORT SCHEMA USING IMPDP (remap_tablespace):
Create a parameter file with below parameters .
oracle:JAYDEV> more imp.par
DIRECTORY=JAY_DIR
DUMPFILE=JAY_EXPORT_%U.dmp
REMAP_TABLESPACE=IAM_TABLE:JAY_TABLE,IAM_INDEX:JAY_INDEX,IAM_LOB:JAY_LOB
LOGFILE=IMPORT_VEMTEST.log
JOB_NAME=JAY_IMPORT
impdp "'/as sysdba'" parfile=imp.par
Verify the object count, for example:
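The same queries used before the export can be rerun after the import; a short sketch reusing the queries shown earlier in this post:
SQL> select count(*) from dba_objects where owner='JAY';
SQL> select tablespace_name from dba_ts_quotas where username='JAY';
The object count should match what was seen before the drop (846 in this example).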
Monday, June 27, 2016
Revoking RESOURCE/DBA role removes UNLIMITED TABLESPACE (Grant)
If you grant a user the RESOURCE or DBA role, the user also gets the UNLIMITED TABLESPACE privilege. OK, that's fine. However, if you revoke either RESOURCE or DBA, the UNLIMITED TABLESPACE privilege is also revoked, even if it was given independently in a grant statement.
SQL> ALTER USER JAY QUOTA UNLIMITED ON JAY_INDEX;
User altered.
SQL> select tablespace_name from dba_ts_quotas where username='JAY';
TABLESPACE_NAME
------------------------------
JAY_INDEX
SQL> grant dba to JAY;
Grant succeeded.
SQL> revoke dba from JAY;
Revoke succeeded.
SQL> select tablespace_name from dba_ts_quotas where username='JAY';
no rows selected
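The practical workaround is simply to grant the quota (or the system privilege) again after the revoke; a sketch reusing the example user above:
SQL> ALTER USER JAY QUOTA UNLIMITED ON JAY_INDEX;
or, if the user genuinely needs it on every tablespace:
SQL> GRANT UNLIMITED TABLESPACE TO JAY;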
Sunday, June 26, 2016
How to send email from APEX?
Command to Send EMail:
DECLARE
l_id number;
BEGIN
l_id := APEX_MAIL.SEND (p_to => 'srm.jay@gmail.edu',
p_from => 'srm.jay@stanford.edu',
p_subj => 'APEX_MAIL from imguat',
p_body => 'Please review the attachment.',
p_body_html => '<b>Please</b> review the attachment');
COMMIT;
END;
/
To send a mail immediately from the APEX mail queue:
begin
APEX_MAIL.PUSH_QUEUE('localhost','25');
end;
/
Oracle Application Express logs successfully submitted messages in the APEX_MAIL_LOG table with a timestamp reflecting your server's local time. Please check the APEX_MAIL_LOG table for any errors or warnings.
select mail_send_error from APEX_MAIL_LOG ;
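Messages that have not yet been pushed can also be checked in the APEX mail queue view (available in recent APEX releases; treat the view name as an assumption for very old versions):
select * from APEX_MAIL_QUEUE;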
Friday, June 24, 2016
How to Export DB_LINKS
We refresh our DEV database from production, so we need to preserve the DEV db links before we start the refresh and restore them afterwards.
We can export all the db links by using expdp.
Please create a para.par file with the below parameters:
full=y
INCLUDE=DB_LINK:"IN(SELECT db_link FROM dba_db_links)"
Then execute the below command to export:
expdp "'/as sysdba'" directory=DATA_PUMP_DIR dumpfile=dblink-restore_JAYDEV.dmp logfile=dblink-restore_JAYDEV.log parfile=para.par
After the refresh, please execute the below impdp command to recreate the db links.
Note: The db links we are going to import should not exist in the target database (DEV). Please drop the db links if they already exist.
impdp "'/as sysdba'" directory=DATA_PUMP_DIR dumpfile=dblink-restore_JAYDEV.dmp logfile=dblink_restore_JAYDEV.log
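To generate the drop statements mentioned in the note above, something like the following can be spooled first; remember that private db links can only be dropped while connected as the owning schema ('JAY' is just the example owner used in this post):
set head off
select 'drop public database link ' || db_link || ';' from dba_db_links where owner = 'PUBLIC';
select 'drop database link ' || db_link || ';' from dba_db_links where owner = 'JAY';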
ALTERNATE METHOD:
Before the refresh, we can take the DDL of the db links using the queries below and spool it. Then, after the refresh, we can connect to each schema and recreate the db links.
set long 100000 head off
spool exp_dblinks.dat
SELECT DBMS_METADATA.GET_DDL('DB_LINK', DB_LINK, 'PUBLIC') FROM DBA_DB_LINKS WHERE OWNER = 'PUBLIC';
SELECT DBMS_METADATA.GET_DDL('DB_LINK', DB_LINK, 'JAY') FROM DBA_DB_LINKS WHERE OWNER = 'JAY';
Wednesday, June 22, 2016
Steps to Create Incident Rule in Oracle EM For Linux Host Monitoring.
In our environment we already had a default monitoring template that is applied to all the Unix hosts. As DBAs, we need to know the server status (up or down), high CPU & memory alerts, and disk space alerts, so we need to create an incident rule for those.
Note: I haven't provided steps on how to create a template and apply it to a target. I have just given the steps to apply a rule for the servers where metrics are collected by a template. In the future, I will provide the steps to create a template as well.
Setup -> Incidents -> Incident Rules
Create Rule Set
Name: JAY_DEV_HOST_MONITORING
Applies to: Targets
Targets tab:
Specific targets radio button selected
Add Groups Click on +Add icon
Target type: Group
Target Name: g_jay_dev_host_unix
under Rules Tab
click on Create
Select: Incoming events and updates to events
Checkbox: Type: Metric Alert
All events of type Metric Alert
Advanced Selection Options
checkbox: Severity In Critical;Warning;
Next
Create New Rule - Add Actions
Click Add
Leave everything to default
Under Advanced Notifications
Select checkbox for
provide email address
Continue
Click Next
Name:METRIC_RULE_ALERT
Click Next
Click Continue
Click OK
Click Save
under Rules Tab
click on Create
Select: Incoming events and updates to events
Checkbox: Type: Target Availability
Specific events of type Target Availability
click on Add
select Target type as Host
choose Agent unreachable,Agent Back Up from Down,Down
click OK
Next
Create New Rule - Add Actions
Click Add
Leave everything to default
Under Advanced Notifications
Select checkbox for
provide email address
Under Repeat Notifications
Mark the checkbox for Send Repeat notifications
Continue
Click Next
Name:Target_Availablity_Status
Click Next
Click Continue
Click OK
Click Save
Monday, June 13, 2016
Steps to Create Physical Standby Database
Creating a Physical Standby Database:
On Primary Database:
a. Please make sure the primary database has ARCHIVELOG mode enabled.
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG
SQL> select name from v$database;
NAME
---------
PROD
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 726540288 bytes
Fixed Size 2230040 bytes
Variable Size 469764328 bytes
Database Buffers 251658240 bytes
Redo Buffers 2887680 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
b. Please make sure FORCE LOGGING is enabled on the primary database.
SQL> SELECT force_logging FROM v$database;
FOR
---
NO
SQL> ALTER DATABASE force logging;
Database altered.
SQL> SELECT force_logging FROM v$database;
FOR
---
YES
c. Please Set the following Initialization Parameters on the Primary Database:
ALTER SYSTEM SET log_archive_dest_1='location=/archive/prod/' SCOPE=both;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='service=stby_db async valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=stby_db' SCOPE=both ;
ALTER SYSTEM SET log_archive_dest_state_2 ='defer';
ALTER SYSTEM SET log_archive_config= 'dg_config=(prim_db,stby_db)';
ALTER SYSTEM SET log_archive_max_processes=8;
ALTER SYSTEM SET db_unique_name = prim_db scope=spfile;
Recycle the database.
These parameters prepare the primary database to support a standby database 'stby_db'. log_archive_dest_2 is responsible for transferring redo to the standby site later; currently it is deferred, and we will enable it once the standby database exists. log_archive_config records all db_unique_name values participating in this configuration. Log transport will not be allowed to any database not on the list.
d.Create a PFILE from the Primary SPFILE in the Format 'init<Standby SID>.ora'
create pfile='/tmp/initstby_db.ora' from spfile;
e.Add the tns entry for both primary and standby databases in primary tnsnames.ora
f. Copy the prepared 'initstby_db.ora' together with the password file of the primary database (orapwprim_db in $ORACLE_HOME/dbs) to the standby site and place both into the '$ORACLE_HOME/dbs' folder. Then rename the password file to match the standby SID:
mv orapwprim_db orapwstby_db
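A sketch of the copy for step f, assuming the same ORACLE_HOME path on both hosts (standby_host is a placeholder); the rename shown above is then executed on the standby host:
$ scp /tmp/initstby_db.ora $ORACLE_HOME/dbs/orapwprim_db standby_host:$ORACLE_HOME/dbs/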
On Standby database:
a.Adjust the values in initstby_db.ora
*.log_archive_dest_1='location=/archive/standby/'
*.log_archive_config= 'dg_config=(prim_db,stby_db)'
*.log_archive_max_processes = 8
*.fal_server='prim_db'
*.log_file_name_convert = '/u01/app/oracle/oradata/PROD','/u01/app/oracle/oradata/STANDBY'
*.db_file_name_convert = '/u01/app/oracle/oradata/PROD','/u01/app/oracle/oradata/STANDBY'
*.db_unique_name='stby_db'
log_archive_dest_1 --> here defines the local Archive Destination where we will put ArchiveLogs arriving from the Primary Database later.
log_archive_config and db_unique_name are required for the Data Guard Configuration (like on the Primary)
fal_server is required for Gap Resolution to be able to automatically fetch Archive Logs from the Primary again in Case there is a Gap.
To automate the Substitution of Database File Locations we set db_file_name_convert and log_file_name_convert.
You may also adjust audit_file_dest and control_files to match with your Environment and Locations.
Create the Physical Standby Database
In the previous section we prepared the environments and the primary database to support a standby database. Now we can proceed and create the physical standby database itself. There are 3 common ways to perform this, which are shown in the next section. You can choose whichever way is most suitable for you:
1. Creating manually via User-Managed Backups
2. Using RMAN Backup-based Duplication
3. Creating a Standby Database from the active Primary Database without a Backup using RMAN Duplicate.
On Primary Database:
If you want to use a Hot Backup, we have to put the Primary Database into Backup Mode and copy the Datafiles:
SQL> alter database begin backup;
$ cp /oracle/oradata/prim_db/*.dbf /backup/
SQL> alter database end backup;
We can create the Standby Controlfile since the Backup is complete (either Hot or Cold Backup)
SQL> alter database create standby controlfile as '/tmp/control01.ctl';
Copy the data and control files to standby datafile location.
On Standby Database:
$ export ORACLE_SID=stby_db
SQL> connect / as sysdba
SQL> startup mount
Post steps:
we prepared and created the Physical Standby Database. Now we can start the Log Transport and Log Apply Services to have the Data Guard Environment completely operational.
First of all we should now add standby redo logs to the newly created standby database to collect the current redo arriving from the primary database. We can add those using the following command.
On Standby:
SQL> alter database add standby logfile <group #> ('<Member>') size <size>;
To verify if standby redo logs get allocated on the standby database:
select * from v$standby_log;
On Primary:
Next we can enable log transport services on the primary database, which were prepared initially. Performing a logfile switch afterwards will finally enable it (all changes to log_archive_dest_n and log_archive_dest_state_n become active after the next log switch once set).
SQL> alter system set log_archive_dest_state_2 = 'enable' scope=both;
SQL> alter system switch logfile;
On Standby:
alter database recover managed standby database using current logfile disconnect;
Query v$managed_standby to monitor the progress of the managed recovery:
SQL> select * from v$managed_standby where process = 'MRP0';
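To confirm that redo is actually being shipped and applied, the archived log history on the standby can be checked as well; a minimal sketch:
SQL> select thread#, sequence#, applied from v$archived_log order by sequence#;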
FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.
FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database). Given the dependency of FAL_CLIENT on FAL_SERVER, the two parameters should be configured or changed at the same time.
At primary:
fal_server = ' '
fal_client = ' '
The primary will never have a gap, so there is no need for any fal* parameter here.
At remote standby database:
fal_server = 'prim_db'
fal_client = 'stby_db'
When the remote standby has a gap, it can get the archive logs from the primary database, hence the fal_server parameter. It wants the primary to send the FAL response to 'stby_db', hence the fal_client setting.
In case there is a gap on stby_db, it will request the fal_server prim_db, and the primary will send the logs to the fal_client stby_db.
Tuesday, June 7, 2016
Access Control List(ACL) in Oracle 11G
With ACL’s, Oracle offers more fine-grained access control for users to access external network resources.
The packages UTL_MAIL, UTL_SMTP, UTL_HTTP, UTL_TCP etc. allow communication beyond the database server to the outside world, but once execute is granted, all hosts can be accessed. This can be interpreted as a security flaw, since no login is required when using UTL_TCP, for example. DBAs are advised to revoke the execute privileges from PUBLIC on these kinds of packages.
Since Oracle 11g, the Access Control List has been introduced. Not only can you control who has access to these packages by granting, but you can now also control which resources they can call. We can control accessibility based on host and port number.
Create an ACL and assign host, port and privileges.
1.
BEGIN
DBMS_NETWORK_ACL_ADMIN.create_acl (
acl => 'jaydba.xml',
description => 'jaydba',
principal => 'APPS',
is_grant => TRUE,
privilege => 'connect',
start_date => SYSTIMESTAMP,
end_date => NULL);
COMMIT;
END;
/
2.
BEGIN
DBMS_NETWORK_ACL_ADMIN.assign_acl (
acl => 'jaydba.xml',
host => 'jaydba.blogspot.com',
lower_port => 443,
upper_port => 443);
END;
/
3.
BEGIN
DBMS_NETWORK_ACL_ADMIN.add_privilege (
acl => 'jaydba.xml',
principal => 'APPS',
is_grant => FALSE,
privilege => 'resolve',
position => NULL,
start_date => NULL,
end_date => NULL);
COMMIT;
END;
/
4.Drop ACL.
begin
DBMS_NETWORK_ACL_ADMIN.drop_acl (acl => 'jaydba.xml');
COMMIT;
END;
/
5.Unassign ACL:
begin
dbms_network_acl_admin.unassign_acl(
acl => 'jaydba.xml',
host => 'jaydba.blogspot.com',
lower_port => 443,
upper_port => 443
);
end;
/
6.Delete Privilege:
begin
dbms_network_acl_admin.delete_privilege('jaydba.xml', 'APPS', NULL, 'connect');
end;
/
In case we would like to change the host name or port, we just have to unassign and assign again (steps 5 & 2); there is no need to drop the ACL and recreate it. The current configuration can be checked from the dictionary views, for example:
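These are the standard 11g dictionary views for ACL assignments and privileges:
SQL> select host, lower_port, upper_port, acl from dba_network_acls;
SQL> select acl, principal, privilege, is_grant from dba_network_acl_privileges;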
Accessing https sites:
If the external network resource we try to access is SSL protected, then we need to create a wallet and import the certificates. Please make sure to import the certificate chain properly (server, root, and intermediate certs).
orapki wallet create -wallet /home/oracle/wallet -pwd password123 -auto_login
orapki wallet add -wallet /home/oracle/wallet -trusted_cert -cert verisignclass3.cer -pwd password123
orapki wallet add -wallet /home/oracle/wallet -trusted_cert -cert www.verisign.com.cer -pwd password123
orapki wallet add -wallet /home/oracle/wallet -trusted_cert -cert jaydba.cer -pwd password123
orapki wallet display -wallet /home/oracle/wallet
To verify:
select UTL_HTTP.request('url', proxy, wallet_path, wallet_password) from dual;
proxy -- (Optional) Specifies a proxy server to use when making the HTTP request
wallet_password -- (Optional) Specifies the password required to open the wallet.
ex:
select UTL_HTTP.request('https://jaydba.blogspot.com',null,'file:/home/oracle/wallet',null) from dual;
Tuesday, May 31, 2016
Install Oracle Database 11.2.0.2 on OEL6
a. Copy Software to /stage
b. Minimum Hardware Requirements
Ensure that your system meets the following Physical Memory requirements:
Minimum: 1 GB of RAM
Recommended: 2 GB of RAM or more
c.Swap disk space proportional to the system's physical memory as follows:
RAM Swap Space
Between 1 GB and 2 GB 1.5 times the size of RAM
Between 2 GB and 16 GB Equal to the size of RAM
More than 16 GB 16 GB
NOTE: The above recommendations (from the Oracle® Database Installation Guide 12c Release 1 (12.1) for Linux) are MINIMUM recommendations for installations. Further RAM and swap space may be required to tune/improve RDBMS performance.
d. 1.0 GB (1024MB) of disk space (and less than 2TB of disk space) in the /tmp directory.
If the free space available in the /tmp directory is less than what is required, then complete one of the following steps:
Delete unnecessary files from the /tmp directory to meet the disk space requirement.
Set the TMP and TMPDIR environment variables when setting the oracle user's environment.
e. Approximately 6.5 GB of local disk space for the Database Software Files.
f. Install the oracle-validated rpm, which will take care of all the pre-req steps:
yum install oracle-validated
1. It will install all required rpms.
2. It will change the kernel parameters.
3. It will change the /etc/security/limits.conf file (typical entries are shown below).
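Typical oracle user entries written to /etc/security/limits.conf look like the following; treat the exact values as an assumption and verify what the oracle-validated rpm actually set on your system:
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536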
g.Make sure umask is 022.
h.Verify the latest version of PAM is loaded, then add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
i. Start runInstaller.
Issues faced while installing :
11.2.0.2: The installer should only show a single "missing package" failure for the "pdksh" package. It can be ignored because we installed the "ksh" package in its place.
j.Run root.sh and verify by logging in database
Thursday, May 12, 2016
Steps to Refresh RAC Database using RMAN Backup
Environment Details:
Source : (Two Node RAC)
Database : RJ1PRD
ASM DG : +RJ1PRD_DG
Target: (Two Node RAC)
Database : RJ1UAT
ASM DG : +RJ1UAT_DG
1. Copy the RMAN backup pieces to Target Database Server.
2. Take a backup of db_link and proxy users etc...from Target Database.
3. Shut down the target database RJ1UAT (both instances).
4. Add the below *convert* parameters and change the cluster_database parameter from TRUE to FALSE on the first node.
Ex:
# added for cloning
db_file_name_convert=('+RJ1PRD_DG','+RJ1UAT_DG')
log_file_name_convert=('+RJ1PRD_DG','+RJ1UAT_DG')
*.cluster_database=FALSE
5. Start the database on first node.
6. Make sure we have enough space on Target ASM diskgroup.
SELECT name, total_mb/1024/1024 TOTAL_TB,free_mb/1024/1024 FREE_TB, free_mb/total_mb*100 as percentage FROM v$asm_diskgroup where name='RJ1UAT_DG';
7.Drop Target Database.
Shut down the database and start it in mount exclusive restrict mode.
SQL> startup mount exclusive restrict
ORACLE instance started.
Total System Global Area 2.9931E+10 bytes
Fixed Size 2267792 bytes
Variable Size 1.2885E+10 bytes
Database Buffers 1.6979E+10 bytes
Redo Buffers 64827392 bytes
Database mounted.
SQL> select logins,parallel from v$instance;
LOGINS PAR
---------- ---
RESTRICTED NO
SQL> select name from v$database;
NAME
---------
RJ1UAT
SQL> drop database;
Database dropped.
Then make sure all the data & control files were deleted properly. You can check the free space using the below query.
SELECT name, total_mb/1024/1024 TOTAL_TB,free_mb/1024/1024 FREE_TB, free_mb/total_mb*100 as percentage FROM v$asm_diskgroup where name='RJ1UAT_DG';
8.Startup nomount and issue duplicate command on first node.
startup nomount;
Save the file restore.rman with below commands.
run
{
allocate auxiliary channel ch1 type disk;
allocate auxiliary channel ch2 type disk;
allocate auxiliary channel ch3 type disk;
allocate auxiliary channel ch4 type disk;
allocate auxiliary channel ch5 type disk;
allocate auxiliary channel ch6 type disk;
allocate auxiliary channel ch7 type disk;
allocate auxiliary channel ch8 type disk;
duplicate database to RJ1UAT
BACKUP LOCATION '/backup/RMAN_BACKUPS/Sat/'
logfile
group 1 ('+RJ1UAT_DG', '+RJ1UAT_DG') size 500M REUSE,
group 2 ('+RJ1UAT_DG', '+RJ1UAT_DG') size 500M REUSE,
group 3 ('+RJ1UAT_DG', '+RJ1UAT_DG') size 500M REUSE,
group 4 ('+RJ1UAT_DG', '+RJ1UAT_DG') size 500M REUSE,
group 5 ('+RJ1UAT_DG', '+RJ1UAT_DG') size 500M REUSE
;
}
rman auxiliary / nocatalog log=/backup/RMAN_BACKUPS/logs/RJ1UAT.log
@restore.rman
9. Once the duplicate command has completed, remove the below parameters from the init.ora and change the cluster_database value on the first node.
# Remove after cloning
db_file_name_convert=('+RJ1PRD_DG','+RJ1UAT_DG')
log_file_name_convert=('+RJ1PRD_DG','+RJ1UAT_DG')
#change after cloning
*.cluster_database=TRUE
Recycle the database
10. Add the redo log thread for instacne 2.
alter database add logfile thread 2 group 6 ('+RJ1UAT_DG') size 500m reuse;
alter database add logfile thread 2 group 7 ('+RJ1UAT_DG') size 500m reuse;
alter database add logfile thread 2 group 8 ('+RJ1UAT_DG') size 500m reuse;
alter database add logfile thread 2 group 9 ('+RJ1UAT_DG') size 500m reuse;
alter database add logfile thread 2 group 10 ('+RJ1UAT_DG') size 500m reuse;
alter database enable public thread 2;
11. Then start the database on the second node, verify gv$instance (see the query below), and start the listener.
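A minimal check that both instances are open after the restart:
SQL> select inst_id, instance_name, status from gv$instance;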
Note: In case you want to find out the point in time of the restored data, you can grep "set until scn" from /backup/RMAN_BACKUPS/logs/RJ1UAT.log. Based on that SCN, you can find the timestamp:
select scn_to_timestamp(2880392)from dual;