Oracle Direct NFS (dNFS) is an NFS client built into the Oracle database that accesses NFS shares on NAS storage directly, bypassing the operating system's NFS client. It makes the integration of NFS storage and the database simpler and more efficient, and the direct integration improves performance with fast, scalable access. This article explains how to enable dNFS and how to clone an Oracle database over dNFS.
Activating Direct NFS (dNFS)
Create the shares on the ZFS Storage appliance, and create matching user and group IDs on the operating system.
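For example, a minimal sketch assuming the common oracle/oinstall IDs (the exact UID/GID values are assumptions); the IDs on the database server must match the IDs configured for the share on the ZFS appliance so that file ownership is preserved over NFS:

groupadd -g 54321 oinstall        # GID must match the group ID set on the ZFS share (assumed value)
useradd -u 54321 -g oinstall oracle   # UID must match the user ID set on the ZFS share (assumed value)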
Enable dNFS on the database servers (on the specific nodes in a Real Application Clusters environment); this article describes how to open a backup taken from a 2-node RAC as a single instance.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
What the make target actually does:
rm -f /u01/app/oracle/product/11.2.0.4/db_1/lib/libodm11.so
cp /u01/app/oracle/product/11.2.0.4/db_1/lib/libnfsodm11.so /u01/app/oracle/product/11.2.0.4/db_1/lib/libodm11.so
# To disable dNFS:
make -f ins_rdbms.mk dnfs_off
# Restart the database to see the relevant dNFS line in the alert log.
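After the restart, the alert log should contain a banner similar to "Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0". Once I/O is flowing over dNFS, the dNFS v$ views give a quick confirmation; a minimal check, run as SYSDBA:

-- NFS servers the instance is connected to through dNFS, and the files opened over it
select svrname, dirname from v$dnfs_servers;
select filename from v$dnfs_files;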
mkdir /dnfs01
chown -R oracle:oinstall /dnfs01/
mkdir /dnfs02
chown -R oracle:oinstall /dnfs02/
Linux
/etc/fstab
172.16.1.222:/export/com1 /dnfs01 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
172.16.1.223:/export/com2 /dnfs02 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
Solaris
/etc/vfstab
172.16.1.222:/export/com1 - /dnfs01 nfs - yes rw,bg,hard,rsize=131072,wsize=131072,vers=3,nointr,timeo=600,proto=tcp,noac
172.16.1.223:/export/com2 - /dnfs02 nfs - yes rw,bg,hard,rsize=131072,wsize=131072,vers=3,nointr,timeo=600,proto=tcp,noac
# Mount options can vary according to the file type (dbf, redo, binary, rman backup).
Depending on the version, files such as redo logs, datafiles, voting disks, and OCR can be placed on dNFS mounts.
Example:
172.16.1.223:/export/data01 - /ZFSBackup/data01 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.222:/export/data02 - /ZFSBackup/data02 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.223:/export/redolog01 - /ZFSBackup/redolog01 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.222:/export/redolog01 - /ZFSBackup/redolog02 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.223:/export/Backup01 - /ZFSBackup/backup01 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.222:/export/Backup01 - /ZFSBackup/backup01 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.223:/export/undo02 - /ZFSBackup/undo02 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
172.16.1.222:/export/undo01 - /ZFSBackup/undo01 nfs - no rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,llock,xattr
Direct NFS reads its mount configuration from $ORACLE_HOME/dbs/oranfstab first, then /etc/oranfstab, and finally falls back to the operating-system mount table (/etc/mtab on Linux):

$ORACLE_HOME/dbs/oranfstab
#### Controller 1
server:172.16.1.222
local:172.16.1.200 path:172.16.1.222
local:172.16.1.201 path:172.16.1.222
local:172.16.1.202 path:172.16.1.222
local:172.16.1.203 path:172.16.1.222
dontroute
export: /export/r12datap1 mount:/r12datap1

#### Controller 2
server:172.16.1.223
local:172.16.1.200 path:172.16.1.223
local:172.16.1.201 path:172.16.1.223
local:172.16.1.202 path:172.16.1.223
local:172.16.1.203 path:172.16.1.223
dontroute
export: /export/r12datap2 mount:/r12datap2
Recommended reading:
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)
Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)
Step by Step – Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)
Clone your dNFS Production Database for Testing (Doc ID 1210656.1)
Recommended Patches for Direct NFS Client (Doc ID 1495104.1)
Direct NFS monitoring and v$views (Doc ID 1495739.1)
Direct NFS Frequently Asked Questions (Doc ID 1496040.1)
Solutions to possible problems:
Database alert log entries such as: Direct NFS: Failed to set socket buffer size. wtmax=[1048576] rtmax=[1048576], errno=-1
FAQs related to Direct NFS (Doc ID 1496040.1)
Solaris
ndd -set /dev/tcp tcp_max_buf 1056768
ndd -set /dev/tcp tcp_xmit_hiwat 1056768
ndd -set /dev/tcp tcp_recv_hiwat 1056768
To save the settings:
/etc/inittab
tm1::sysinit:/usr/sbin/ndd -set /dev/tcp tcp_max_buf 1056768
tm2::sysinit:/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 1056768
tm3::sysinit:/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 1056768
Further TCP parameters that can be tuned the same way:

ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_cwnd_max 2097152
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_xmit_hiwat 400000
To query the current values:

ndd -get /dev/tcp tcp_conn_req_max_q
ndd -get /dev/tcp tcp_conn_req_max_q0
ndd -get /dev/tcp tcp_max_buf
ndd -get /dev/tcp tcp_cwnd_max
ndd -get /dev/tcp tcp_recv_hiwat
ndd -get /dev/tcp tcp_xmit_hiwat
Linux
Please check the values of the following parameters and bump up the max to be greater than 1056768.
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem
Backing up a dNFS database
Making a backup for the clone
run {
  sql 'alter database begin backup';
  set nocfau;
  backup as copy database format '/dnfs/1backup/d_%U_.dbf';
  sql 'alter database end backup';
}
An alternative is a level 0 incremental backup set, with 16 channels spread across two backup mounts:

run {
  sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';
  sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';
  sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';
  sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';
  configure device type disk parallelism 16 backup type to copy;
  allocate channel ch1 device type disk format '/backup01/rman/%U';
  allocate channel ch2 device type disk format '/backup01/rman/%U';
  allocate channel ch3 device type disk format '/backup01/rman/%U';
  allocate channel ch4 device type disk format '/backup01/rman/%U';
  allocate channel ch5 device type disk format '/backup01/rman/%U';
  allocate channel ch6 device type disk format '/backup01/rman/%U';
  allocate channel ch7 device type disk format '/backup01/rman/%U';
  allocate channel ch8 device type disk format '/backup01/rman/%U';
  allocate channel ch9 device type disk format '/backup02/rman/%U';
  allocate channel ch10 device type disk format '/backup02/rman/%U';
  allocate channel ch11 device type disk format '/backup02/rman/%U';
  allocate channel ch12 device type disk format '/backup02/rman/%U';
  allocate channel ch13 device type disk format '/backup02/rman/%U';
  allocate channel ch14 device type disk format '/backup02/rman/%U';
  allocate channel ch15 device type disk format '/backup02/rman/%U';
  allocate channel ch16 device type disk format '/backup02/rman/%U';
  backup as backupset incremental level 0 section size 64g database tag 'FullBackUpSet_L0' plus archivelog tag 'FullBackUpSet_L0';
}
An incrementally updated image copy can then keep the clone master current:

run {
  configure device type disk parallelism 4 backup type to copy;
  allocate channel ch1 device type disk format '/nfs/backup/prod1/%U';
  allocate channel ch2 device type disk format '/nfs/backup/prod1/%U';
  allocate channel ch3 device type disk format '/nfs/backup/prod1/%U';
  allocate channel ch4 device type disk format '/nfs/backup/prod1/%U';
  backup incremental level 1 for recover of copy with tag 'zfssa_clone' database reuse;
  recover copy of database with tag 'zfssa_clone';
}
#!/bin/bash
# Full hot backup script
# Usage:   <path>/imagebck.sh SID <hotbkup path>
# Example: /u01/app/oracle/admin/scripts/imagebck.sh orcl1 /backup01/dnfs01/orcl1
# Example: /u01/app/oracle/admin/scripts/imagebck.sh orcl1 /backup01/dnfs01/orcl1 > /backup01/dnfs01/rman.log
# This script exits if the instance is not running on this server.
#
ps -ef | grep -v grep | grep ora_pmon_$1 | wc -l | while read CONTROL
do
  if [ "$CONTROL" -gt 0 ]; then
    ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
    export ORACLE_HOME
    PATH=$ORACLE_HOME/bin:$PATH:/bin:/usr/bin:/usr/local/bin:.
    export PATH
    ORACLE_SID=$1
    export ORACLE_SID
    BKUPLOC=$2
    export BKUPLOC
    # rm $BKUPLOC/*.bkp
    # rm $BKUPLOC/*.bkp.gz
    rman target=/@BCKP catalog=/@CAT log=/dnfs/backup.log <<EOF
sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';
sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';
configure device type disk parallelism 32 backup type to copy;
run {
  sql 'alter database begin backup';
  set nocfau;
  backup as copy database format '$BKUPLOC/d_%U_.dbf';
  sql 'alter database end backup';
}
exit
EOF
  fi
done
Cloning a dNFS database
A 2-node Oracle 11.2.0.4 RAC database on ASM was cloned into a single-node test environment.
Prerequisites on the server that will open the clone, also known as the target database:
a. It must be at the same RDBMS version and patch level as the source.
b. dNFS must be enabled on it.
c. The NFS shares must be mounted with the Oracle-recommended options.
Linux
/etc/fstab

176.16.1.222:/export/CloneBackup /ZFSBackup/TestClone nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
Step 1: Making a backup from the source machine
run {
  sql 'alter database begin backup';
  set nocfau;
  backup as copy database format '/ZFSBackup/TestClone/d_%U_.dbf';
  sql 'alter database end backup';
}
Step 2: Setting the environment variables
export ORACLE_SID=TEST
export MASTER_COPY_DIR=/ZFSBackup/TestClone/            # where the backup copies are; this path should contain only the datafiles
export CLONE_FILE_CREATE_DEST=/u01/app/oracle/oradata/TEST   # where the clone files are created; this can also be a dNFS mount
export CLONEDB_NAME=TEST
Step 3: pfile
# $ORACLE_HOME/dbs/initTEST.ora
*.audit_file_dest='/u01/app/oracle/admin/TEST/adump'
*.audit_trail='DB'
*.compatible='11.2.0.4.0'
*.control_files='/u01/app/oracle/oradata/TEST/control01.ctl','/u01/app/oracle/oradata/TEST/control02.ctl'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata/TEST/'
*.db_domain='WORLD'
*.db_name='TEST'
*.db_recovery_file_dest_size=4070572032
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'
*.event=''
*.memory_target=843055104
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.clonedb=true
_no_recovery_through_resetlogs=TRUE

The key parameter is clonedb=true: it makes the clone's datafiles sparse copy-on-write files that read unchanged blocks from the backup copies instead of duplicating them.
mkdir -p /u01/app/oracle/admin/TEST/adump
mkdir -p /u01/app/oracle/fast_recovery_area
Step 4: Creating scripts
cd /ZFSBackup/

clone.pl can be downloaded from My Oracle Support (Clone your dNFS Production Database for Testing, Doc ID 1210656.1). It generates two scripts:
crtdb.sql — starts the instance in NOMOUNT mode and creates the control file.
dbren.sql — renames the datafiles and opens the database with RESETLOGS.
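For illustration, a minimal sketch of what a generated dbren.sql looks like; the file names (d_data_01.dbf, data01.dbf) are hypothetical, and the real script contains one dbms_dnfs.clonedb_renamefile call per datafile:

begin
  -- Creates a sparse clone file that reads unchanged blocks from the backup copy
  dbms_dnfs.clonedb_renamefile('/ZFSBackup/TestClone/d_data_01.dbf',
                               '/u01/app/oracle/oradata/TEST/data01.dbf');
end;
/
alter database open resetlogs;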
perl /ZFSBackup/clone.pl /u01/app/oracle/product/11.2.0/db_1/dbs/initTEST.ora crtdb.sql dbren.sql
$ sqlplus / as sysdba
SQL> @crtdb.sql
SQL> @dbren.sql
To follow progress in the alert log:

tail -2000f /u01/app/oracle/diag/rdbms/test/TEST/trace/alert_TEST.log
In case of an oradism permission error, run as root:

chmod 4755 $ORACLE_HOME/bin/oradism
********************************
An example of an oranfstab file
# An oranfstab config file for a 2-node RAC
# Node_1: 172.1.1.151 172.1.1.152
# Node_2: 172.1.1.153 172.1.1.154
# controller_1: 172.1.1.209 172.1.1.210 172.1.1.211 172.1.1.212
# controller_2: 172.1.1.221 172.1.1.222 172.1.1.223 172.1.1.224

# NODE_1
# controller_1
server: 172.1.1.205
local:172.1.1.151 path:172.1.1.205
local:172.1.1.152 path:172.1.1.205
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.206
local:172.1.1.151 path:172.1.1.206
local:172.1.1.152 path:172.1.1.206
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.207
local:172.1.1.151 path:172.1.1.207
local:172.1.1.152 path:172.1.1.207
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.208
local:172.1.1.151 path:172.1.1.208
local:172.1.1.152 path:172.1.1.208
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

#//////////////////////////////
# controller_2
server: 172.1.1.221
local:172.1.1.151 path:172.1.1.221
local:172.1.1.152 path:172.1.1.221
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.222
local:172.1.1.151 path:172.1.1.222
local:172.1.1.152 path:172.1.1.222
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.223
local:172.1.1.151 path:172.1.1.223
local:172.1.1.152 path:172.1.1.223
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.224
local:172.1.1.151 path:172.1.1.224
local:172.1.1.152 path:172.1.1.224
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

#//////////////////////////////
# NODE_2
# controller_1
server: 172.1.1.205
local:172.1.1.153 path:172.1.1.205
local:172.1.1.154 path:172.1.1.205
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.206
local:172.1.1.153 path:172.1.1.206
local:172.1.1.154 path:172.1.1.206
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.207
local:172.1.1.153 path:172.1.1.207
local:172.1.1.154 path:172.1.1.207
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

server: 172.1.1.208
local:172.1.1.153 path:172.1.1.208
local:172.1.1.154 path:172.1.1.208
export: /export/Cont1_orcl_1 mount:/ZFSbackup/Cont1_orcl_1
export: /export/Cont1_orcl_2 mount:/ZFSbackup/Cont1_orcl_2

# controller_2
server: 172.1.1.221
local:172.1.1.153 path:172.1.1.221
local:172.1.1.154 path:172.1.1.221
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.222
local:172.1.1.153 path:172.1.1.222
local:172.1.1.154 path:172.1.1.222
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.223
local:172.1.1.153 path:172.1.1.223
local:172.1.1.154 path:172.1.1.223
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2

server: 172.1.1.224
local:172.1.1.153 path:172.1.1.224
local:172.1.1.154 path:172.1.1.224
export: /export/Cont2_orcldbt_1 mount:/ZFSbackup/Cont2_orcldbt_1
export: /export/Cont2_orcldbt_2 mount:/ZFSbackup/Cont2_orcldbt_2