Tibero Active Cluster Installation
Did you know that, like Oracle, Tibero also supports an active-active cluster (TAC)? In fact, Tibero has supported active clustering since 2008. In this article we will walk through the installation of a Tibero active cluster step by step. Before that, we will configure Tibero Active Storage (TAS), Tibero's counterpart to Oracle's ASM (Automatic Storage Management).
You can visit technet.tmaxsoft.com for the installation files and detailed documentation.
VirtualBox Environment
|              | Node1         | Node2         |
|--------------|---------------|---------------|
| VM name      | node1         | node2         |
| hostname     | node1         | node2         |
| OS           | SL7.1         | SL7.1         |
| Memory       | 3GB           | 3GB           |
| Public       | 192.168.2.21  | 192.168.2.22  |
| Interconnect | 192.168.56.21 | 192.168.56.22 |
Tibero Active Storage Steps
You can check each VM's configuration from the host:

```
VBoxManage showvminfo node1
```
Step_1 Creating the user and group (on both nodes)
```
/usr/sbin/groupadd -g 501 dba
/usr/sbin/useradd -u 502 -g dba tibero
```
Step_2 Creating the directories (on both nodes)
```
mkdir -p /Tibero/tas
mkdir -p /Tibero/tac
chown -R tibero:dba /Tibero
```
Step_3 Installing the required packages (on both nodes)
```
yum install -y gcc-*
yum install -y gcc-c++-*
yum install -y libgcc-*
yum install -y libstdc++-*
yum install -y libstdc++-devel-*
yum install -y compat-libstdc++-*
yum install -y libaio-*
yum install -y libaio-devel-*
```
Step_4 Operating system settings (on both nodes)
```
cp /etc/security/limits.conf /etc/security/limits.conf_bck
echo -e 'tibero hard nofile 65536\ntibero soft nofile 4096\ntibero soft nproc 4096\ntibero hard nproc 16384' >> /etc/security/limits.conf
```
```
echo session required pam_limits.so >> /etc/pam.d/login
```

```
cp /etc/sysctl.conf /etc/sysctl.conf_bck
echo -e 'kernel.shmmni = 4096\nkernel.sem = 10000 32000 10000 10000\nfs.file-max = 6553600\nnet.ipv4.ip_local_port_range = 1024 65000\nnet.core.rmem_default = 262144\nnet.core.rmem_max = 4194304\nnet.core.wmem_default = 262144\nnet.core.wmem_max = 1048576' >> /etc/sysctl.conf
```
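The new kernel parameters are not picked up until the next boot; they can be applied and checked immediately with standard commands (an addition to the original steps):

```
# load the new kernel parameters from /etc/sysctl.conf immediately
sysctl -p

# verify the session limits in a fresh login as the tibero user
ulimit -n   # open file descriptors
ulimit -u   # max user processes
```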
VirtualBox Steps
Step_5 Creating the shared disks (on the physical host system)
Default diskspace for control files (high redundancy), 3 × 512M
```
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS01.vdi --size 512 --format VDI --variant Fixed
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS02.vdi --size 512 --format VDI --variant Fixed
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS03.vdi --size 512 --format VDI --variant Fixed
```
DATA1 disk group
```
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS04.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS05.vdi --size 2048 --format VDI --variant Fixed
```
RA1 disk group
```
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS06.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename /Data/VM/Shared_Disks/TAS07.vdi --size 2048 --format VDI --variant Fixed
```
Mark all of the disks as shareable:

```
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS01.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS02.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS03.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS04.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS05.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS06.vdi --type shareable
VBoxManage modifyhd /Data/VM/Shared_Disks/TAS07.vdi --type shareable
```
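To confirm that a disk really is shareable before attaching it (an optional check, not in the original walkthrough):

```
# prints the disk's format, size, and type; "Type" should read "shareable"
VBoxManage showhdinfo /Data/VM/Shared_Disks/TAS01.vdi
```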
Adding disks to nodes
node1
```
VBoxManage storageattach node1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS01.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS02.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 5 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS03.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 6 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS04.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 7 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS05.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 8 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS06.vdi --mtype shareable
VBoxManage storageattach node1 --storagectl "SATA" --port 9 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS07.vdi --mtype shareable
```
node2
```
VBoxManage storageattach node2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS01.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS02.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 5 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS03.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 6 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS04.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 7 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS05.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 8 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS06.vdi --mtype shareable
VBoxManage storageattach node2 --storagectl "SATA" --port 9 --device 0 --type hdd --medium /Data/VM/Shared_Disks/TAS07.vdi --mtype shareable
```
Step_6 Configuring the shared disks (on node1)
Create a single partition on each shared disk (sdb through sdh):

```
fdisk /dev/sdx
```
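The author runs fdisk interactively per disk; a scripted equivalent (a sketch, assuming one primary partition per disk and the sdb..sdh device names used below) could look like this:

```
# create one primary partition on each shared disk; the echoed answers
# walk fdisk through: new, primary, partition 1, default first/last
# sector, write
for d in b c d e f g h; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/sd$d
done
```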
On both nodes (the nodes other than the one where fdisk was run may need to be rebooted so that they can see the new partitions):
```
sudo chown tibero:dba /dev/sdb1
sudo chown tibero:dba /dev/sdc1
sudo chown tibero:dba /dev/sdd1
sudo chown tibero:dba /dev/sde1
sudo chown tibero:dba /dev/sdf1
sudo chown tibero:dba /dev/sdg1
sudo chown tibero:dba /dev/sdh1
```
```
sudo chmod 660 /dev/sdb1
sudo chmod 660 /dev/sdc1
sudo chmod 660 /dev/sdd1
sudo chmod 660 /dev/sde1
sudo chmod 660 /dev/sdf1
sudo chmod 660 /dev/sdg1
sudo chmod 660 /dev/sdh1
```
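Note that ownership and permissions under /dev do not survive a reboot; a udev rule (my addition, with a hypothetical file name) can make them persistent:

```
# /etc/udev/rules.d/99-tibero.rules  (hypothetical file name)
# give the tibero user ownership of the shared-disk partitions at boot
KERNEL=="sd[b-h]1", OWNER="tibero", GROUP="dba", MODE="0660"
```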
Step_7 Extracting the binaries (on both nodes)
```
tar -xvf /Data/SetUp/tibero6-bin-FS02-linux64-114545-opt-20151210192436-tested.tar.gz -C /Tibero/tas/
```
Step_8 .bash_profile settings
A sample .bash_profile is provided at the end of this page.
Step_9 Parameter files
node_1
vi /Tibero/tas/tibero6/config/tas1.tip
```
DB_NAME=tas
LISTENER_PORT=9620
MAX_SESSION_COUNT=200
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
INSTANCE_TYPE=TAS
TAS_DISKSTRING="/dev/sdb*,/dev/sdc*,/dev/sdd*,/dev/sde*,/dev/sdf*,/dev/sdh*,/dev/sdg*"
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=192.168.56.21
LOCAL_CLUSTER_PORT=20000
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_PORT=20005
THREAD=0
```
node_2
vi /Tibero/tas/tibero6/config/tas2.tip
```
DB_NAME=tas
LISTENER_PORT=9620
MAX_SESSION_COUNT=200
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
INSTANCE_TYPE=TAS
TAS_DISKSTRING="/dev/sdb*,/dev/sdc*,/dev/sdd*,/dev/sde*,/dev/sdf*,/dev/sdh*,/dev/sdg*"
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=192.168.56.22
LOCAL_CLUSTER_PORT=20000
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_PORT=20005
THREAD=1
```
Step_10 tbdsn.tbr (the tnsnames.ora equivalent)
node_1
vim /Tibero/tas/tibero6/client/config/tbdsn.tbr
```
tas1=(
    (INSTANCE=(HOST=localhost)
        (PORT=9620)
        (DB_NAME=tas1)
    )
)
```
node_2
vim /Tibero/tas/tibero6/client/config/tbdsn.tbr
```
tas2=(
    (INSTANCE=(HOST=localhost)
        (PORT=9620)
        (DB_NAME=tas2)
    )
)
```
Step_11 License files (they must be requested from technet.tmaxsoft.com) (on both nodes)
```
cp license.xml /Tibero/tas/tibero6/license/license.xml
```
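The demo license is issued per hostname, so it is worth confirming that each node's file matches the host it is copied to (the grep below is illustrative; the exact XML tag may differ):

```
# the hostname embedded in license.xml must match the node's hostname
hostname
grep -i hostname /Tibero/tas/tibero6/license/license.xml
```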
Before running TAS as a cluster, the default disk space is created on node1.
Step_12 Creating the default disk space
```
export TB_SID=tas1
tbboot -t nomount
```
Default disk space (on one node only)
```
CREATE DISKSPACE ds0 HIGH REDUNDANCY
FAILGROUP fg1 DISK '/dev/sdb1' NAME sdb1
FAILGROUP fg2 DISK '/dev/sdc1' NAME sdc1
FAILGROUP fg3 DISK '/dev/sdd1' NAME sdd1
ATTRIBUTE 'AU_SIZE'='1M';
```
Step_13 The firewall and SELinux must be disabled (on both nodes)
```
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
```
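The commands above only handle firewalld; to disable SELinux as well (standard RHEL-family steps, not shown in the original):

```
# switch to permissive mode for the current boot
setenforce 0

# disable permanently (takes effect after a reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```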
Step_14 Starting up the TAS instance
node_1
```
export TB_SID=tas1
tbcm -c    # creates the cmfile in the default disk space ---> SUCCESS
tbcm -b
```
Checking the cluster status
```
[tibero@node1 ~]$ tbcm -s
======================= LOCAL STATUS ===========================
NODE NAME : [101] cm@192.168.56.21:20005
CLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)
STATUS : NODE ACTIVE
INCARNATION_NO : 1 (ACK 1, COMMIT 1)
HEARTBEAT PERIOD : 30 ticks (1 tick = 1000000 micro-sec)
SVC PROBE PERIOD : 10 ticks (expires 0 ticks later)
SVC DOWN CMD : "/Tibero/tas/tibero6/scripts/cm_down_cmd.sh"
CONTROL FILE No.0 (A): +0 (512 byte/block)[VALID]
CONTROL FILE No.1 (A): +1 (512 byte/block)[VALID]
CONTROL FILE No.2 (A): +2 (512 byte/block)[VALID]
CONTROL FILE EXPIRE: 30 ticks later
LOG LEVEL : 2
======================= CLUSTER STATUS =========================
INCARNATION_NO : 1 (COMMIT 1)
FILE HEADER SIZE : 1024 bytes ( 512 byte-block )
# of NODES : 1 nodes (LAST_ID = 101)
MASTER NODE : [101] cm@192.168.56.21:20005
MEMBERSHIP : AUTO (SPLIT)
NODE LIST... (R:role, Scd:scheduled, F/O: index of VIP failover node)
                                  H/B    F/O VIP
Idx R Scd Node Status             offset Idx Alias ID  Name
--- - --- --------------- ------ --- ------ --- ---------------
 #0 M ON  NODE ACTIVE      1024   N/A N/A     101 cm@192.168.56.21:20005
===================== OTHER NODE STATUS ========================
```
```
tbboot
tbsql sys/tibero
ALTER DISKSPACE ds0 ADD THREAD 1;
```
node_2
```
export TB_SID=tas2
tbcm -b
tbboot
```
Checking node2
```
[tibero@node2 log]$ tbcm -s
======================= LOCAL STATUS ===========================
NODE NAME : [102] cm@192.168.56.22:20005
CLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)
STATUS : SERVICE ACTIVE
INCARNATION_NO : 4 (ACK 4, COMMIT 4)
HEARTBEAT PERIOD : 30 ticks (1 tick = 1000000 micro-sec)
SVC PROBE PERIOD : 10 ticks (expires 10 ticks later)
SVC DOWN CMD : "/Tibero/tas/tibero6/scripts/cm_down_cmd.sh"
CONTROL FILE No.0 (A): +0 (512 byte/block)[VALID]
CONTROL FILE No.1 (A): +1 (512 byte/block)[VALID]
CONTROL FILE No.2 (A): +2 (512 byte/block)[VALID]
CONTROL FILE EXPIRE: 29 ticks later
LOG LEVEL : 2
======================= CLUSTER STATUS =========================
INCARNATION_NO : 4 (COMMIT 4)
FILE HEADER SIZE : 1024 bytes ( 512 byte-block )
# of NODES : 2 nodes (LAST_ID = 102)
MASTER NODE : [101] cm@192.168.56.21:20005
MEMBERSHIP : AUTO (SPLIT)
NODE LIST... (R:role, Scd:scheduled, F/O: index of VIP failover node)
                                  H/B    F/O VIP
Idx R Scd Node Status             offset Idx Alias ID  Name
--- - --- --------------- ------ --- ------ --- ---------------
 #0 M ON  SERVICE ACTIVE   1024   N/A N/A     101 cm@192.168.56.21:20005
 #1 S ON  SERVICE ACTIVE   1536   N/A N/A     102 cm@192.168.56.22:20005
===================== OTHER NODE STATUS ========================
SEQ (NAME) : #0 ([101] cm@192.168.56.21:20005)
STATUS (CONN.) : SERVICE ACTIVE (CONNECTED)
NET ADDR (PORT) : 192.168.56.21 (20005)
```
Step_15 Creating the data disk groups (on one node only)
```
CREATE DISKSPACE DATA1 NORMAL REDUNDANCY
FAILGROUP fg1 DISK '/dev/sde1' NAME sde1
FAILGROUP fg2 DISK '/dev/sdf1' NAME sdf1
ATTRIBUTE 'AU_SIZE'='1M';
```
```
CREATE DISKSPACE RA1 NORMAL REDUNDANCY
FAILGROUP fg1 DISK '/dev/sdg1' NAME sdg1
FAILGROUP fg2 DISK '/dev/sdh1' NAME sdh1
ATTRIBUTE 'AU_SIZE'='1M';
```
Log files
```
/Tibero/tas/tibero6/instance/as0/log/cm/cmd.log
/Tibero/tas/tibero6/instance/as0/log/cm/trace_list.log
```
TAS Monitoring
```
select * from V$TAS_ALIAS;
select * from V$TAS_DISKSPACE;
select * from V$TAS_DISK_STAT;
select * from V$TAS_DISKSPACE_STAT;
select * from V$TAS_FILE;
select * from V$TAS_DISK;
```
Tibero Active Cluster Steps
Step_16 Extracting the binaries (on both nodes)
```
tar -xvf /Data/SetUp/tibero6-bin-FS02-linux64-114545-opt-20151210192436-tested.tar.gz -C /Tibero/tac/
chmod -Rf g+rwx /Tibero/tac
sh /Tibero/tac/tibero6/config/gen_tip.sh
```
Step_17 TAC parameter files
node_1
vim /Tibero/tac/tibero6/config/tac1.tip
```
DB_NAME=tbrdb
LISTENER_PORT=8620
CONTROL_FILES="+DATA1/c1.ctl"
USE_TAS=Y
TAS_PORT=9620
DB_CREATE_FILE_DEST="+DATA1"
LOG_ARCHIVE_DEST="+RA1"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=192.168.56.21
LOCAL_CLUSTER_PORT=8625
THREAD=0
UNDO_TABLESPACE=UNDO0
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_FILE_NAME="+DS0/tbcm"
CM_PORT=8630
CM_HEARTBEAT_EXPIRE=15
CM_WATCHDOG_EXPIRE=10
CM_NET_EXPIRE_MARGIN=5
```
node_2
vim /Tibero/tac/tibero6/config/tac2.tip
```
DB_NAME=tbrdb
LISTENER_PORT=8620
CONTROL_FILES="+DATA1/c1.ctl"
USE_TAS=Y
TAS_PORT=9620
DB_CREATE_FILE_DEST="+DATA1"
LOG_ARCHIVE_DEST="+RA1"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=192.168.56.22
LOCAL_CLUSTER_PORT=8625
THREAD=1
UNDO_TABLESPACE=UNDO1
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_FILE_NAME="+DS0/tbcm"
CM_PORT=8630
CM_HEARTBEAT_EXPIRE=15
CM_WATCHDOG_EXPIRE=10
CM_NET_EXPIRE_MARGIN=5
```
Step_18 tbdsn.tbr (the tnsnames.ora equivalent)
node_1
vim /Tibero/tac/tibero6/client/config/tbdsn.tbr
```
tac1=(
    (INSTANCE=(HOST=localhost)
        (PORT=8620)
        (DB_NAME=tbrdb)
    )
)
```
node_2
```
tac2=(
    (INSTANCE=(HOST=localhost)
        (PORT=8620)
        (DB_NAME=tbrdb)
    )
)
```
Step_19 License file (on both nodes)
```
cp /Tibero/tas/tibero6/license/license.xml /Tibero/tac/tibero6/license/
```
Step_20 Starting up the TAC instance
on node_1
```
tbcm -c              # ----> success
tbcm -b              # LOCK
tbboot -t nomount -c
tbsql sys/tibero
```
Step_21 Creating the database
```
create database
character set WE8ISO8859P9
logfile group 1 size 20m,
        group 2 size 20m,
        group 3 size 20m
archivelog
datafile 'system001.dtf'
    size 100m autoextend on next 10m
default temporary tablespace temp
    tempfile 'temp001.dtf'
    size 100m autoextend on next 10m
undo tablespace undo0
    datafile 'undo001.dtf'
    size 200m autoextend on next 10m
default tablespace usr
    datafile 'usr001.dtf'
    size 100m autoextend on next 10m;
```
```
tbboot
```
Step_22 Creating the UNDO1 tablespace and redo logs for the other node(s) (on one node only)
```
tbsql sys/tibero

CREATE UNDO TABLESPACE UNDO1
DATAFILE 'undo011.dtf'
SIZE 200M
AUTOEXTEND ON NEXT 50M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 SIZE 20M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 SIZE 20M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 SIZE 20M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
```
Step_23 Executing system.sh (on one node only)
```
$TB_HOME/scripts/system.sh
# SYS password    --> tibero
# SYSCAT password --> syscat
```
Step_24 Checks
```
cat /Tibero/tac/tibero6/instance/tac1/log/system_init.log
```
```
col member for a30
select * from v$logfile;

GROUP#  STATUS  TYPE    MEMBER
------------------------------------------
     1          ONLINE  +DATA1/log001.log
     2          ONLINE  +DATA1/log002.log
     3          ONLINE  +DATA1/log003.log
     4          ONLINE  +DATA1/log004.log
     5          ONLINE  +DATA1/log005.log
     6          ONLINE  +DATA1/log006.log

6 rows selected.
```
```
col db_name for a10
select instance_name, db_name, version, status from v$instance;

INSTANCE_NAME  DB_NAME  VERSION  STATUS
----------------------------------------
tac1           tbrdb    6        NORMAL
```
```
col owner for a10
select owner from dba_objects where object_name ='DATABASE_PROPERTIES';

OWNER
----------
PUBLIC
SYSCAT

2 rows selected.
```
```
set lin 200
col name for a30
col value for a15
col comment_str for a40
select * from database_properties;

NAME                     VALUE          COMMENT_STR
----------------------------------------------------
DFLT_PERM_TS             USR            Name of default permanent tablespace
DFLT_TEMP_TS             TEMP           Name of default temporary tablespace
DFLT_UNDO_TS             UNDO0          Name of default undo tablespace
NLS_CHARACTERSET         WE8ISO8859P9
NLS_NCHAR_CHARACTERSET   UTF16
DB_NAME                  tbrdb          database name
```
Step_25 Starting node2
node_1
```
tbdown
tbcm -d
tbcm -b
tbboot
```
node_2
```
tbcm -b
tbboot
```
```
[tibero@node2 ~]$ tbcm -s
======================= LOCAL STATUS ===========================
NODE NAME : [102] cm@192.168.56.22:8630
CLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)
STATUS : SERVICE ACTIVE
INCARNATION_NO : 4 (ACK 4, COMMIT 4)
HEARTBEAT PERIOD : 15 ticks (1 tick = 1000000 micro-sec)
SVC PROBE PERIOD : 10 ticks (expires 10 ticks later)
SVC DOWN CMD : "/Tibero/tac/tibero6/scripts/cm_down_cmd.sh"
CONTROL FILE (A) : +DS0/tbcm (512 byte/block)
CONTROL FILE EXPIRE: 14 ticks later
LOG LEVEL : 2
======================= CLUSTER STATUS =========================
INCARNATION_NO : 4 (COMMIT 4)
FILE HEADER SIZE : 1024 bytes ( 512 byte-block )
# of NODES : 2 nodes (LAST_ID = 102)
MASTER NODE : [101] cm@192.168.56.21:8630
MEMBERSHIP : AUTO (SPLIT)
NODE LIST... (R:role, Scd:scheduled, F/O: index of VIP failover node)
                                  H/B    F/O VIP
Idx R Scd Node Status             offset Idx Alias ID  Name
--- - --- --------------- ------ --- ------ --- ---------------
 #0 M ON  SERVICE ACTIVE   1024   N/A N/A     101 cm@192.168.56.21:8630
 #1 S ON  SERVICE ACTIVE   1536   N/A N/A     102 cm@192.168.56.22:8630
===================== OTHER NODE STATUS ========================
SEQ (NAME) : #0 ([101] cm@192.168.56.21:8630)
STATUS (CONN.) : SERVICE ACTIVE (CONNECTED)
NET ADDR (PORT) : 192.168.56.21 (8630)
```
For Failover Tests
By adding the connection definition below on any client on the network, further tests can be run with tbsql sys/tibero@tac.
/Tibero/tibero6/client/config/tbdsn.tbr
```
tac=(
    (INSTANCE=(HOST=192.168.10.23)
        (PORT=8620)
        (DB_NAME=tbrdb)
    )
    (INSTANCE=(HOST=192.168.10.24)
        (PORT=8620)
        (DB_NAME=tbrdb)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)
```
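With the alias in place, a simple failover test (an illustrative session of mine; v$instance is the same view used in the checks above) is to connect, see which instance served the session, stop that instance, and reconnect:

```
tbsql sys/tibero@tac

SQL> select instance_name, status from v$instance;
-- shut down the instance you landed on (tbdown on that node),
-- reconnect, and the query should now return the surviving instance
```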
Profiles
```
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/sbin
export PATH

####################################################
####################################################
echo "Enter your Choice"
echo "1) Tibero TAC - node1"
echo "2) Tibero TAS - tas1"
read ans
if [ $ans -eq '1' ]; then
    ######## TIBERO ENV ########
    export TB_HOME=/Tibero/tac/tibero6
    export TB_SID=node1
    export TB_PROF_DIR=$TB_HOME/bin/prof
    export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:~/tbinary/monitor:$PATH
    export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
    export SHLIB_PATH=$LD_LIBRARY_PATH:$SHLIB_PATH
    export LIBPATH=$LD_LIBRARY_PATH:$LIBPATH

    ######## TIBERO alias ########
    alias tbhome='cd $TB_HOME'
    alias tbbin='cd $TB_HOME/bin'
    alias tblog='cd $TB_HOME/instance/$TB_SID/log'
    alias tbcfg='cd $TB_HOME/config'
    alias tbcfgv='vi $TB_HOME/config/$TB_SID.tip'
    alias tbcli='cd ${TB_HOME}/client/config'
    alias tbcliv='vi ${TB_HOME}/client/config/tbdsn.tbr'
    #alias tbcliv='vi ${TB_HOME}/client/config/tbnet_alias.tbr'
    alias tbi='cd ~/tbinary'
    #alias clean='tbdown clean'
    #alias dba='tbsql sys/tibero'
    alias tm='cd ~/tbinary/monitor;monitor;cd -'
    #alias tbdata='cd $TB_HOME/tbdata'
####################################################
####################################################
elif [ $ans -eq '2' ]; then
    ######## TIBERO ENV ########
    export TB_HOME=/Tibero/tas/tibero6
    export TB_SID=tas1
    export TB_PROF_DIR=$TB_HOME/bin/prof
    export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:~/tbinary/monitor:$PATH
    export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
    export SHLIB_PATH=$LD_LIBRARY_PATH:$SHLIB_PATH
    export LIBPATH=$LD_LIBRARY_PATH:$LIBPATH

    ######## TIBERO alias ########
    alias tbhome='cd $TB_HOME'
    alias tbbin='cd $TB_HOME/bin'
    alias tblog='cd $TB_HOME/instance/$TB_SID/log'
    alias tbcfg='cd $TB_HOME/config'
    alias tbcfgv='vi $TB_HOME/config/$TB_SID.tip'
    alias tbcli='cd ${TB_HOME}/client/config'
    alias tbcliv='vi ${TB_HOME}/client/config/tbdsn.tbr'
    #alias tbcliv='vi ${TB_HOME}/client/config/tbnet_alias.tbr'
    alias tbi='cd ~/tbinary'
    #alias clean='tbdown clean'
    #alias dba='tbsql sys/tibero'
    alias tm='cd ~/tbinary/monitor;monitor;cd -'
    #alias tbdata='cd $TB_HOME/tbdata'
else
    echo "Tibero Environment Not Set ! ! !"
fi
####################################################
####################################################
```
Log
```
alias tracelog="tail -f ${TB_HOME}/instance/tas1/log/tracelog/trace.log"
alias dbmslog="tail -f ${TB_HOME}/instance/tas1/log/dbmslog/dbms.log"
alias listenerlog="tail -f ${TB_HOME}/instance/tas1/log/lsnr/trace_list.log"
alias tracelistlog="tail -f ${TB_HOME}/instance/tas1/log/cm/trace_list.log"
alias th='cd $TB_HOME'
alias tbbin="cd ${TB_HOME}/bin"
alias tblog="cd ${TB_HOME}/instance/$TB_SID/log"
alias tbcfg="cd ${TB_HOME}/config"
alias tbcfgv="vi ${TB_HOME}/config/$TB_SID.tip"
alias tbcli="cd ${TB_HOME}/client/config"
alias tbcliv="vi ${TB_HOME}/client/config/tbdsn.tbr"
#alias tbcliv='vi ${TB_HOME}/client/config/tbnet_alias.tbr'
#alias tbdata='cd $TB_HOME/tbdata'
alias tbi='cd ~/tbinary'
alias clean='tbdown clean'
alias dba='tbsql sys/tibero'
alias tbps="ps -ef | grep tbs"
##########################
alias c='clear'
alias ls='ls -h --color'
alias lx='ls -lXB'      # Sort by extension.
alias lk='ls -lSr'      # Sort by size, biggest last.
alias lt='ls -ltr'      # Sort by date, most recent last.
alias lc='ls -ltcr'     # Sort by/show change time, most recent last.
alias lu='ls -ltur'     # Sort by/show access time, most recent last.
# The ubiquitous 'll': directories first, with alphanumeric sorting:
alias ll="ls -lv --group-directories-first"
alias lm='ll | more'    # Pipe through 'more'
alias lr='ll -R'        # Recursive ls.
alias la='ll -A'        # Show hidden files.
alias tree='tree -Csuh' # Nice alternative to 'recursive ls' ...
```