{"id":2728,"date":"2019-03-13T12:26:16","date_gmt":"2019-03-13T09:26:16","guid":{"rendered":"https:\/\/sysdba.org\/?p=2728"},"modified":"2019-03-13T12:26:16","modified_gmt":"2019-03-13T09:26:16","slug":"tibero-active-cluster-installation","status":"publish","type":"post","link":"https:\/\/sysdba.org\/en\/tibero-active-cluster-installation\/","title":{"rendered":"Tibero Active Cluster Installation"},"content":{"rendered":"<p>Finally, another RDBMS solution supports an active-active cluster like Oracle&#8217;s RAC. In fact, active clustering has been supported by Tibero since 2008.<br \/>\nToday we will see how to install a Tibero Active Cluster (TAC) database on two 64-bit Linux virtual machines using VirtualBox.<br \/>\nBefore the TAC installation we must complete the Tibero Active Storage (TAS) steps (TAS is Tibero&#8217;s equivalent of Oracle&#8217;s ASM) to prepare the shared storage area.<br \/>\nFor installation files and documentation you can visit technet.tmaxsoft.com<\/p>\n<p><strong>VirtualBox Environment<\/strong><\/p>\n<table style=\"width: 625px;\">\n<tbody>\n<tr>\n<td style=\"width: 297.781px;\"><\/td>\n<td style=\"width: 339.219px;\"><strong>Node1<\/strong><\/td>\n<td style=\"width: 339.219px;\"><strong>Node2<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"width: 297.781px;\">VM name<\/td>\n<td style=\"width: 339.219px;\">node1<\/td>\n<td style=\"width: 339.219px;\">node2<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 297.781px;\">hostname<\/td>\n<td style=\"width: 339.219px;\">node1<\/td>\n<td style=\"width: 339.219px;\">node2<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 297.781px;\">OS<\/td>\n<td style=\"width: 339.219px;\">SL7.1<\/td>\n<td style=\"width: 339.219px;\">SL7.1<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 297.781px;\">Memory<\/td>\n<td style=\"width: 339.219px;\">3GB<\/td>\n<td style=\"width: 339.219px;\">3GB<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 297.781px;\">Public<\/td>\n<td style=\"width: 339.219px;\">192.168.2.21<\/td>\n<td style=\"width: 339.219px;\">192.168.2.22<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 
297.781px;\">Interconnect<\/td>\n<td style=\"width: 339.219px;\">192.168.56.21<\/td>\n<td style=\"width: 339.219px;\">192.168.56.22<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-size: 14pt;\"><strong>Tibero Active Storage Steps<\/strong><\/span><\/p>\n<p>[crayon]VBoxManage showvminfo node1<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_1 creating users\/groups (on both nodes)<\/strong><br \/>\n[crayon]\/usr\/sbin\/groupadd -g 501 dba<br \/>\n\/usr\/sbin\/useradd -u 502 -g dba tibero<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_2 Creating directories (on both nodes)<\/strong><br \/>\n[crayon]mkdir -p \/Tibero\/tas<br \/>\nmkdir -p \/Tibero\/tac<br \/>\nchown -R tibero:dba \/Tibero<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_3 installing required packages (on both nodes)<\/strong><br \/>\n[crayon]yum install -y gcc-*<br \/>\nyum install -y gcc-c++-*<br \/>\nyum install -y libgcc-*<br \/>\nyum install -y libstdc++-*<br \/>\nyum install -y libstdc++-devel-*<br \/>\nyum install -y compat-libstdc++-*<br \/>\nyum install -y libaio-*<br \/>\nyum install -y libaio-devel-*<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_4 configuring the operating system (on both nodes)<\/strong><br \/>\n[crayon]cp \/etc\/security\/limits.conf \/etc\/security\/limits.conf_bck<br \/>\necho -e 'tibero hard nofile 65536\\ntibero soft nofile 4096\\ntibero soft nproc 4096\\ntibero hard nproc 16384' &gt;&gt; \/etc\/security\/limits.conf<br \/>\n[\/crayon]<\/p>\n<p>[crayon]echo session required pam_limits.so &gt;&gt; \/etc\/pam.d\/login<br \/>\n[\/crayon]<\/p>\n<p>[crayon]cp \/etc\/sysctl.conf \/etc\/sysctl.conf_bck<br \/>\necho -e 'kernel.shmmni = 4096\\nkernel.sem = 10000 32000 10000 10000\\nfs.file-max = 6553600\\nnet.ipv4.ip_local_port_range = 1024 65000\\nnet.core.rmem_default = 262144\\nnet.core.rmem_max = 4194304\\nnet.core.wmem_default = 262144\\nnet.core.wmem_max = 
1048576' &gt;&gt; \/etc\/sysctl.conf<br \/>\n[\/crayon]<\/p>\n<p><strong>Virtual Box Steps<\/strong><br \/>\n<strong>Step_5 creating shared disks (on the host operating system)<\/strong><br \/>\nDefault disk space for control files (high redundancy), 3 x 512M<br \/>\n[crayon]VBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS01.vdi --size 512 --format VDI --variant Fixed<br \/>\nVBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS02.vdi --size 512 --format VDI --variant Fixed<br \/>\nVBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS03.vdi --size 512 --format VDI --variant Fixed<br \/>\n[\/crayon]<\/p>\n<p>DATA1 disk group<br \/>\n[crayon]VBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS04.vdi --size 2048 --format VDI --variant Fixed<br \/>\nVBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS05.vdi --size 2048 --format VDI --variant Fixed<br \/>\n[\/crayon]<\/p>\n<p>RA1 disk group<br \/>\n[crayon]VBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS06.vdi --size 2048 --format VDI --variant Fixed<br \/>\nVBoxManage createhd --filename \/Data\/VM\/Shared_Disks\/TAS07.vdi --size 2048 --format VDI --variant Fixed<br \/>\n[\/crayon]<\/p>\n<p>[crayon]VBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS01.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS02.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS03.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS04.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS05.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS06.vdi --type shareable<br \/>\nVBoxManage modifyhd \/Data\/VM\/Shared_Disks\/TAS07.vdi --type shareable<br \/>\n[\/crayon]<\/p>\n<p>Adding disks to 
nodes<br \/>\nnode1<br \/>\n[crayon]VBoxManage storageattach node1 --storagectl \"SATA\" --port 3 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS01.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 4 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS02.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 5 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS03.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 6 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS04.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 7 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS05.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 8 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS06.vdi --mtype shareable<br \/>\nVBoxManage storageattach node1 --storagectl \"SATA\" --port 9 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS07.vdi --mtype shareable<br \/>\n[\/crayon]<\/p>\n<p>node2<br \/>\n[crayon]VBoxManage storageattach node2 --storagectl \"SATA\" --port 3 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS01.vdi --mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 4 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS02.vdi --mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 5 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS03.vdi 
--mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 6 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS04.vdi --mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 7 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS05.vdi --mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 8 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS06.vdi --mtype shareable<br \/>\nVBoxManage storageattach node2 --storagectl \"SATA\" --port 9 --device 0 --type hdd --medium \/Data\/VM\/Shared_Disks\/TAS07.vdi --mtype shareable<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_6 partitioning the disks (on one node)<\/strong><br \/>\n[crayon]fdisk \/dev\/sdx<br \/>\n[\/crayon]<br \/>\nThen run the following on both nodes (a reboot is sometimes necessary on the nodes where fdisk was not executed, i.e. every node other than node1):<\/p>\n<p>[crayon]sudo chown tibero:dba \/dev\/sdb1<br \/>\nsudo chown tibero:dba \/dev\/sdc1<br \/>\nsudo chown tibero:dba \/dev\/sdd1<br \/>\nsudo chown tibero:dba \/dev\/sde1<br \/>\nsudo chown tibero:dba \/dev\/sdf1<br \/>\nsudo chown tibero:dba \/dev\/sdg1<br \/>\nsudo chown tibero:dba \/dev\/sdh1<br \/>\n[\/crayon]<\/p>\n<p>[crayon]sudo chmod 660 \/dev\/sdb1<br \/>\nsudo chmod 660 \/dev\/sdc1<br \/>\nsudo chmod 660 \/dev\/sdd1<br \/>\nsudo chmod 660 \/dev\/sde1<br \/>\nsudo chmod 660 \/dev\/sdf1<br \/>\nsudo chmod 660 \/dev\/sdg1<br \/>\nsudo chmod 660 \/dev\/sdh1<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_7 Extracting binaries (on both nodes)<\/strong><br \/>\n[crayon]tar -xvf \/Data\/SetUp\/tibero6-bin-FS02-linux64-114545-opt-20151210192436-tested.tar.gz -C \/Tibero\/tas\/<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_8 Configuring .bash_profile<\/strong><br \/>\nAt the end of this post there is a .bash_profile for both instances.<br \/>\n<strong>Step_9 
parameter files<\/strong><\/p>\n<p>node_1<br \/>\nvi \/Tibero\/tas\/tibero6\/config\/tas1.tip<\/p>\n<p>[crayon]DB_NAME=tas<br \/>\nLISTENER_PORT=9620<br \/>\nMAX_SESSION_COUNT=200<br \/>\nTOTAL_SHM_SIZE=512M<br \/>\nMEMORY_TARGET=1G<br \/>\nINSTANCE_TYPE=TAS<br \/>\nTAS_DISKSTRING=\"\/dev\/sdb*,\/dev\/sdc*,\/dev\/sdd*,\/dev\/sde*,\/dev\/sdf*,\/dev\/sdh*,\/dev\/sdg*\"<br \/>\nCLUSTER_DATABASE=Y<br \/>\nLOCAL_CLUSTER_ADDR=192.168.56.21<br \/>\nLOCAL_CLUSTER_PORT=20000<br \/>\nCM_CLUSTER_MODE=ACTIVE_SHARED<br \/>\nCM_PORT=20005<br \/>\nTHREAD=0<br \/>\n[\/crayon]<\/p>\n<p>node_2<br \/>\nvi \/Tibero\/tas\/tibero6\/config\/tas2.tip<\/p>\n<p>[crayon]DB_NAME=tas<br \/>\nLISTENER_PORT=9620<br \/>\nMAX_SESSION_COUNT=200<br \/>\nTOTAL_SHM_SIZE=512M<br \/>\nMEMORY_TARGET=1G<br \/>\nINSTANCE_TYPE=TAS<br \/>\nTAS_DISKSTRING=\"\/dev\/sdb*,\/dev\/sdc*,\/dev\/sdd*,\/dev\/sde*,\/dev\/sdf*,\/dev\/sdh*,\/dev\/sdg*\"<br \/>\nCLUSTER_DATABASE=Y<br \/>\nLOCAL_CLUSTER_ADDR=192.168.56.22<br \/>\nLOCAL_CLUSTER_PORT=20000<br \/>\nCM_CLUSTER_MODE=ACTIVE_SHARED<br \/>\nCM_PORT=20005<br \/>\nTHREAD=1<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_10 tbdsn.tbr (tnsnames.ora)<\/strong><br \/>\nnode_1<br \/>\nvim \/Tibero\/tas\/tibero6\/client\/config\/tbdsn.tbr<\/p>\n<p>[crayon]tas1=(<br \/>\n(INSTANCE=(HOST=localhost)<br \/>\n(PORT=9620)<br \/>\n(DB_NAME=tas1)<br \/>\n)<br \/>\n)<br \/>\n[\/crayon]<\/p>\n<p>node_2<br \/>\nvim \/Tibero\/tas\/tibero6\/client\/config\/tbdsn.tbr<\/p>\n<p>[crayon]tas2=(<br \/>\n(INSTANCE=(HOST=localhost)<br \/>\n(PORT=9620)<br \/>\n(DB_NAME=tas2)<br \/>\n)<br \/>\n)<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_11 license files (on both nodes)<\/strong><br \/>\nMailed from technet.tmaxsoft.com<br \/>\n[crayon]cp license.xml \/Tibero\/tas\/tibero6\/license\/license.xml<br \/>\n[\/crayon]<\/p>\n<p>Before running TAS instances as a cluster,<br \/>\nrun a TAS instance in NOMOUNT mode and then<br \/>\ncreate the default disk space.<br \/>\nAfter 
creating the default disk space, run TBCM to cluster the TAS instances.<\/p>\n<p><strong>Step_12 creating the default disk space<\/strong><br \/>\nexport TB_SID=tas1<br \/>\n[crayon]tbboot -t nomount<br \/>\n[\/crayon]<\/p>\n<p>Default Disk Space (on one node)<br \/>\n[crayon]CREATE DISKSPACE ds0 HIGH REDUNDANCY<br \/>\nFAILGROUP fg1 DISK<br \/>\n'\/dev\/sdb1' NAME sdb1<br \/>\nFAILGROUP fg2 DISK<br \/>\n'\/dev\/sdc1' NAME sdc1<br \/>\nFAILGROUP fg3 DISK<br \/>\n'\/dev\/sdd1' NAME sdd1<br \/>\nATTRIBUTE 'AU_SIZE'='1M';<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_13 firewall and selinux should be disabled (on both nodes)<\/strong><br \/>\n[crayon]systemctl disable firewalld<br \/>\nsystemctl stop firewalld<br \/>\nsystemctl status firewalld<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_14 starting up the tas instance<\/strong><br \/>\nnode_1<br \/>\nexport TB_SID=tas1<br \/>\n[crayon]tbcm -c<br \/>\n# creates the cmfile in the default disk space --&gt; SUCCESS<br \/>\ntbcm -b<br \/>\n[\/crayon]<\/p>\n<p>Check the status of the cluster<br \/>\n[crayon][tibero@node1 ~]$ tbcm -s<br \/>\n======================= LOCAL STATUS ===========================<br \/>\nNODE NAME : [101] cm@192.168.56.21:20005<br \/>\nCLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)<br \/>\nSTATUS : NODE ACTIVE<br \/>\nINCARNATION_NO : 1 (ACK 1, COMMIT 1)<br \/>\nHEARTBEAT PERIOD : 30 ticks (1 tick = 1000000 micro-sec)<br \/>\nSVC PROBE PERIOD : 10 ticks (expires 0 ticks later)<br \/>\nSVC DOWN CMD : &#8220;\/Tibero\/tas\/tibero6\/scripts\/cm_down_cmd.sh&#8221;<br \/>\nCONTROL FILE No.0 (A): +0 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE No.1 (A): +1 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE No.2 (A): +2 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE EXPIRE: 30 ticks later<br \/>\nLOG LEVEL : 2<br \/>\n======================= CLUSTER STATUS =========================<br \/>\nINCARNATION_NO : 1 (COMMIT 1)<br \/>\nFILE HEADER SIZE : 1024 bytes ( 512 
byte-block )<br \/>\n# of NODES : 1 nodes (LAST_ID = 101)<br \/>\nMASTER NODE : [101] cm@192.168.56.21:20005<br \/>\nMEMBERSHIP : AUTO (SPLIT)<br \/>\nNODE LIST&#8230; (R:role, Scd:scheduled, F\/O: index of VIP failover node)<br \/>\nH\/B F\/O VIP<br \/>\nIdx R Scd Node Status offset Idx Alias ID Name<br \/>\n&#8212; &#8211; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212; &#8212;&#8212; &#8212; &#8212;&#8212; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212;<br \/>\n#0 M ON NODE ACTIVE 1024 N\/A N\/A 101 cm@192.168.56.21:20005<br \/>\n===================== OTHER NODE STATUS ========================<br \/>\n[\/crayon]<\/p>\n<p>[crayon]tbboot<br \/>\ntbsql sys\/tibero<br \/>\nALTER DISKSPACE ds0 ADD THREAD 1;<br \/>\n[\/crayon]<\/p>\n<p>node _ 2<br \/>\n[crayon]TB_SID=tas2<br \/>\ntbcm -b<br \/>\ntbboot<br \/>\n[\/crayon]<\/p>\n<p>checking node 2<br \/>\n[crayon][tibero@node2 log]$ tbcm -s<br \/>\n======================= LOCAL STATUS ===========================<br \/>\nNODE NAME : [102] cm@192.168.56.22:20005<br \/>\nCLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)<br \/>\nSTATUS : SERVICE ACTIVE<br \/>\nINCARNATION_NO : 4 (ACK 4, COMMIT 4)<br \/>\nHEARTBEAT PERIOD : 30 ticks (1 tick = 1000000 micro-sec)<br \/>\nSVC PROBE PERIOD : 10 ticks (expires 10 ticks later)<br \/>\nSVC DOWN CMD : &#8220;\/Tibero\/tas\/tibero6\/scripts\/cm_down_cmd.sh&#8221;<br \/>\nCONTROL FILE No.0 (A): +0 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE No.1 (A): +1 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE No.2 (A): +2 (512 byte\/block)[VALID]<br \/>\nCONTROL FILE EXPIRE: 29 ticks later<br \/>\nLOG LEVEL : 2<br \/>\n======================= CLUSTER STATUS =========================<br \/>\nINCARNATION_NO : 4 (COMMIT 4)<br \/>\nFILE HEADER SIZE : 1024 bytes ( 512 byte-block )<br \/>\n# of NODES : 2 nodes (LAST_ID = 102)<br \/>\nMASTER NODE : [101] cm@192.168.56.21:20005<br \/>\nMEMBERSHIP : AUTO (SPLIT)<br \/>\nNODE LIST&#8230; (R:role, Scd:scheduled, F\/O: index of VIP failover node)<br \/>\nH\/B 
F\/O VIP<br \/>\nIdx R Scd Node Status offset Idx Alias ID Name<br \/>\n&#8212; &#8211; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212; &#8212;&#8212; &#8212; &#8212;&#8212; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212;<br \/>\n#0 M ON SERVICE ACTIVE 1024 N\/A N\/A 101 cm@192.168.56.21:20005<br \/>\n#1 S ON SERVICE ACTIVE 1536 N\/A N\/A 102 cm@192.168.56.22:20005<br \/>\n===================== OTHER NODE STATUS ========================<br \/>\nSEQ (NAME) : #0 ([101] cm@192.168.56.21:20005)<br \/>\nSTATUS (CONN.) : SERVICE ACTIVE (CONNECTED)<br \/>\nNET ADDR (PORT) : 192.168.56.21 (20005)<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_15 creating data disk groups (on one node)<\/strong><br \/>\n[crayon]CREATE DISKSPACE DATA1 NORMAL REDUNDANCY<br \/>\nFAILGROUP fg1 DISK<br \/>\n'\/dev\/sde1' NAME sde1<br \/>\nFAILGROUP fg2 DISK<br \/>\n'\/dev\/sdf1' NAME sdf1<br \/>\nATTRIBUTE 'AU_SIZE'='1M';<br \/>\n[\/crayon]<\/p>\n<p>[crayon]CREATE DISKSPACE RA1 NORMAL REDUNDANCY<br \/>\nFAILGROUP fg1 DISK<br \/>\n'\/dev\/sdg1' NAME sdg1<br \/>\nFAILGROUP fg2 DISK<br \/>\n'\/dev\/sdh1' NAME sdh1<br \/>\nATTRIBUTE 'AU_SIZE'='1M';<br \/>\n[\/crayon]<\/p>\n<p>log files<br \/>\n[crayon]\/Tibero\/tas\/tibero6\/instance\/as0\/log\/cm\/cmd.log<br \/>\n\/Tibero\/tas\/tibero6\/instance\/as0\/log\/cm\/trace_list.log<br \/>\n[\/crayon]<\/p>\n<p>TAS Monitoring<br \/>\n[crayon]select * from V$TAS_ALIAS;<br \/>\nselect * from V$TAS_DISKSPACE;<br \/>\nselect * from V$TAS_DISK_STAT;<br \/>\nselect * from V$TAS_DISKSPACE_STAT;<br \/>\nselect * from V$TAS_FILE;<br \/>\nselect * from V$TAS_DISK;<br \/>\n[\/crayon]<\/p>\n<p><span style=\"font-size: 14pt;\"><strong>Tibero Active Cluster Steps<\/strong><\/span><br \/>\n<strong>Step_16 extracting binaries (on all nodes)<\/strong><br \/>\n[crayon]tar -xvf \/Data\/SetUp\/tibero6-bin-FS02-linux64-114545-opt-20151210192436-tested.tar.gz -C \/Tibero\/tac\/<br 
\/>\nchmod -Rf g+rwx \/Tibero\/tac<br \/>\nsh \/Tibero\/tac\/tibero6\/config\/gen_tip.sh<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_17 tac parameter file<\/strong><br \/>\nnode_1<br \/>\nvim \/Tibero\/tac\/tibero6\/config\/tac1.tip<\/p>\n<p>[crayon]DB_NAME=tbrdb<br \/>\nLISTENER_PORT=8620<br \/>\nCONTROL_FILES=\"+DATA1\/c1.ctl\"<br \/>\nUSE_TAS=Y<br \/>\nTAS_PORT=9620<br \/>\nDB_CREATE_FILE_DEST=\"+DATA1\"<br \/>\nLOG_ARCHIVE_DEST=\"+RA1\"<br \/>\nMAX_SESSION_COUNT=20<br \/>\nTOTAL_SHM_SIZE=512M<br \/>\nMEMORY_TARGET=1G<br \/>\nCLUSTER_DATABASE=Y<br \/>\nLOCAL_CLUSTER_ADDR=192.168.56.21<br \/>\nLOCAL_CLUSTER_PORT=8625<br \/>\nTHREAD=0<br \/>\nUNDO_TABLESPACE=UNDO0<br \/>\nCM_CLUSTER_MODE=ACTIVE_SHARED<br \/>\nCM_FILE_NAME=\"+DS0\/tbcm\"<br \/>\nCM_PORT=8630<br \/>\nCM_HEARTBEAT_EXPIRE=15<br \/>\nCM_WATCHDOG_EXPIRE=10<br \/>\nCM_NET_EXPIRE_MARGIN=5<br \/>\n[\/crayon]<\/p>\n<p>node_2<br \/>\nvim \/Tibero\/tac\/tibero6\/config\/tac2.tip<\/p>\n<p>[crayon]DB_NAME=tbrdb<br \/>\nLISTENER_PORT=8620<br \/>\nCONTROL_FILES=\"+DATA1\/c1.ctl\"<br \/>\nUSE_TAS=Y<br \/>\nTAS_PORT=9620<br \/>\nDB_CREATE_FILE_DEST=\"+DATA1\"<br \/>\nLOG_ARCHIVE_DEST=\"+RA1\"<br \/>\nMAX_SESSION_COUNT=20<br \/>\nTOTAL_SHM_SIZE=512M<br \/>\nMEMORY_TARGET=1G<br \/>\nCLUSTER_DATABASE=Y<br \/>\nLOCAL_CLUSTER_ADDR=192.168.56.22<br \/>\nLOCAL_CLUSTER_PORT=8625<br \/>\nTHREAD=1<br \/>\nUNDO_TABLESPACE=UNDO1<br \/>\nCM_CLUSTER_MODE=ACTIVE_SHARED<br \/>\nCM_FILE_NAME=\"+DS0\/tbcm\"<br \/>\nCM_PORT=8630<br \/>\nCM_HEARTBEAT_EXPIRE=15<br \/>\nCM_WATCHDOG_EXPIRE=10<br \/>\nCM_NET_EXPIRE_MARGIN=5<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_18 tbdsn.tbr (tnsnames.ora)<\/strong><br \/>\nnode_1<br \/>\nvim \/Tibero\/tac\/tibero6\/client\/config\/tbdsn.tbr<\/p>\n<p>[crayon]tac1=(<br \/>\n(INSTANCE=(HOST=localhost)<br \/>\n(PORT=8620)<br \/>\n(DB_NAME=tbrdb)<br \/>\n)<br \/>\n)<br \/>\n[\/crayon]<\/p>\n<p>node_2<br \/>\n[crayon]tac2=(<br 
\/>\n(INSTANCE=(HOST=localhost)<br \/>\n(PORT=8620)<br \/>\n(DB_NAME=tbrdb)<br \/>\n)<br \/>\n)<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_19 license file (on all nodes)<\/strong><br \/>\n[crayon]cp \/Tibero\/tas\/tibero6\/license\/license.xml \/Tibero\/tac\/tibero6\/license\/<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_20 starting up the tac instance<\/strong><br \/>\non node_1<\/p>\n<p>[crayon]tbcm -c --&gt; success<br \/>\ntbcm -b LOCK<br \/>\ntbboot -t nomount -c<br \/>\ntbsql sys\/tibero<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_21 creating database<\/strong><br \/>\n[crayon]create database<br \/>\ncharacter set WE8ISO8859P9<br \/>\nlogfile group 1 size 20m,<br \/>\ngroup 2 size 20m,<br \/>\ngroup 3 size 20m<br \/>\narchivelog<br \/>\ndatafile 'system001.dtf' size 100m<br \/>\nautoextend on next 10m<br \/>\ndefault temporary tablespace temp<br \/>\ntempfile 'temp001.dtf' size 100m<br \/>\nautoextend on next 10m<br \/>\nundo tablespace undo0<br \/>\ndatafile 'undo001.dtf' size 200m<br \/>\nautoextend on next 10m<br \/>\ndefault tablespace usr<br \/>\ndatafile 'usr001.dtf' size 100m<br \/>\nautoextend on next 10m;<br \/>\n[\/crayon]<\/p>\n<p>[crayon]tbboot<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_22 creating UNDO1 and redo logs (on node one)<\/strong><br \/>\n[crayon]tbsql sys\/tibero<br \/>\nCREATE UNDO TABLESPACE UNDO1<br \/>\nDATAFILE 'undo011.dtf' SIZE 200M<br \/>\nAUTOEXTEND ON NEXT 50M;<br \/>\nALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 size 20M;<br \/>\nALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 size 20M;<br \/>\nALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 size 20M;<br \/>\nALTER DATABASE ENABLE PUBLIC THREAD 1;<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_23 executing system.sh (on node one)<\/strong><br \/>\n[crayon]$TB_HOME\/scripts\/system.sh<br \/>\nsys --&gt; tibero<br \/>\nsyscat --&gt; syscat<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_24 checking what&#8217;s 
happened<\/strong><br \/>\n[crayon]cat \/Tibero\/tac\/tibero6\/instance\/tac1\/log\/system_init.log<br \/>\n[\/crayon]<\/p>\n<p>[crayon]col member for a30<br \/>\nselect * from v$logfile;<br \/>\nGROUP# STATUS TYPE MEMBER<br \/>\n&#8212;&#8212;&#8212;- &#8212;&#8212;- &#8212;&#8212; &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<br \/>\n1 ONLINE +DATA1\/log001.log<br \/>\n2 ONLINE +DATA1\/log002.log<br \/>\n3 ONLINE +DATA1\/log003.log<br \/>\n4 ONLINE +DATA1\/log004.log<br \/>\n5 ONLINE +DATA1\/log005.log<br \/>\n6 ONLINE +DATA1\/log006.log<br \/>\n6 rows selected.<br \/>\n[\/crayon]<\/p>\n<p>[crayon]col db_name for a10<br \/>\nselect instance_name, db_name, version, status from v$instance;<br \/>\nINSTANCE_NAME DB_NAME VERSION STATUS<br \/>\n&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br \/>\ntac1 tbrdb 6 NORMAL<br \/>\n[\/crayon]<\/p>\n<p>[crayon]col owner for a10<br \/>\nselect owner from dba_objects where object_name ='DATABASE_PROPERTIES';<br \/>\nOWNER<br \/>\n&#8212;&#8212;&#8212;-<br \/>\nPUBLIC<br \/>\nSYSCAT<br \/>\n2 rows selected.<br \/>\n[\/crayon]<\/p>\n<p>[crayon]set lin 200<br \/>\ncol name for a30<br \/>\ncol value for a15<br \/>\ncol comment_str for a40<br \/>\nselect * from database_properties;<br \/>\nNAME VALUE COMMENT_STR<br \/>\n&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<br \/>\nDFLT_PERM_TS USR Name of default permanent tablespace<br \/>\nDFLT_TEMP_TS TEMP Name of default temporary tablespace<br \/>\nDFLT_UNDO_TS UNDO0 Name of default undo tablespace<br \/>\nNLS_CHARACTERSET WE8ISO8859P9<br \/>\nNLS_NCHAR_CHARACTERSET UTF16<br \/>\nDB_NAME tbrdb database name<br \/>\n[\/crayon]<\/p>\n<p><strong>Step_25 starting node2<\/strong><br \/>\nnode_1<br \/>\n[crayon]tbdown<br \/>\ntbcm -d<br \/>\ntbcm -b<br \/>\ntbboot<br \/>\n[\/crayon]<\/p>\n<p>node_2<br \/>\n[crayon]tbcm -b<br 
\/>\ntbboot<br \/>\n[\/crayon]<\/p>\n<p>[crayon][tibero@node2 ~]$ tbcm -s<br \/>\n======================= LOCAL STATUS ===========================<br \/>\nNODE NAME : [102] cm@192.168.56.22:8630<br \/>\nCLUSTER MODE : ACTIVE_SHARED (GUARD ON, FENCE OFF)<br \/>\nSTATUS : SERVICE ACTIVE<br \/>\nINCARNATION_NO : 4 (ACK 4, COMMIT 4)<br \/>\nHEARTBEAT PERIOD : 15 ticks (1 tick = 1000000 micro-sec)<br \/>\nSVC PROBE PERIOD : 10 ticks (expires 10 ticks later)<br \/>\nSVC DOWN CMD : &#8220;\/Tibero\/tac\/tibero6\/scripts\/cm_down_cmd.sh&#8221;<br \/>\nCONTROL FILE (A) : +DS0\/tbcm (512 byte\/block)<br \/>\nCONTROL FILE EXPIRE: 14 ticks later<br \/>\nLOG LEVEL : 2<br \/>\n======================= CLUSTER STATUS =========================<br \/>\nINCARNATION_NO : 4 (COMMIT 4)<br \/>\nFILE HEADER SIZE : 1024 bytes ( 512 byte-block )<br \/>\n# of NODES : 2 nodes (LAST_ID = 102)<br \/>\nMASTER NODE : [101] cm@192.168.56.21:8630<br \/>\nMEMBERSHIP : AUTO (SPLIT)<br \/>\nNODE LIST&#8230; (R:role, Scd:scheduled, F\/O: index of VIP failover node)<br \/>\nH\/B F\/O VIP<br \/>\nIdx R Scd Node Status offset Idx Alias ID Name<br \/>\n&#8212; &#8211; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212; &#8212;&#8212; &#8212; &#8212;&#8212; &#8212; &#8212;&#8212;&#8212;&#8212;&#8212;<br \/>\n#0 M ON SERVICE ACTIVE 1024 N\/A N\/A 101 cm@192.168.56.21:8630<br \/>\n#1 S ON SERVICE ACTIVE 1536 N\/A N\/A 102 cm@192.168.56.22:8630<br \/>\n===================== OTHER NODE STATUS ========================<br \/>\nSEQ (NAME) : #0 ([101] cm@192.168.56.21:8630)<br \/>\nSTATUS (CONN.) 
: SERVICE ACTIVE (CONNECTED)<br \/>\nNET ADDR (PORT) : 192.168.56.21 (8630)<br \/>\n[\/crayon]<br \/>\nFailover Tests<\/p>\n<p>\/Tibero\/tac\/tibero6\/client\/config\/tbdsn.tbr<\/p>\n<p>[crayon]tac=(<br \/>\n(INSTANCE=(HOST=192.168.10.23)<br \/>\n(PORT=8620)<br \/>\n(DB_NAME=tbrdb)<br \/>\n)<br \/>\n(INSTANCE=(HOST=192.168.10.24)<br \/>\n(PORT=8620)<br \/>\n(DB_NAME=tbrdb)<br \/>\n)<br \/>\n(LOAD_BALANCE=Y)<br \/>\n(USE_FAILOVER=Y)<br \/>\n)<br \/>\n[\/crayon]<\/p>\n<p>Profiles<br \/>\n####################################################<br \/>\n####################################################<br \/>\n[crayon]<br \/>\n# .bash_profile<br \/>\n# Get the aliases and functions<br \/>\nif [ -f ~\/.bashrc ]; then<br \/>\n. ~\/.bashrc<br \/>\nfi<br \/>\n# User specific environment and startup programs<br \/>\nPATH=$PATH:$HOME\/bin:\/sbin<br \/>\nexport PATH<br \/>\n####################################################<br \/>\n####################################################<br \/>\necho \"Enter your Choice\"<br \/>\necho \"1) Tibero TAC - tac1\"<br \/>\necho \"2) Tibero TAS - tas1\"<br \/>\nread ans<br \/>\nif [ $ans -eq 1 ]; then<br \/>\n######## TIBERO ENV ########<br \/>\nexport TB_HOME=\/Tibero\/tac\/tibero6<br \/>\nexport TB_SID=tac1<br \/>\nexport TB_PROF_DIR=$TB_HOME\/bin\/prof<br \/>\nexport PATH=.:$TB_HOME\/bin:$TB_HOME\/client\/bin:~\/tbinary\/monitor:$PATH<br \/>\nexport LD_LIBRARY_PATH=$TB_HOME\/lib:$TB_HOME\/client\/lib:$LD_LIBRARY_PATH<br \/>\nexport SHLIB_PATH=$LD_LIBRARY_PATH:$SHLIB_PATH<br \/>\nexport LIBPATH=$LD_LIBRARY_PATH:$LIBPATH<br \/>\n######## TIBERO alias ########<br \/>\nalias tbhome='cd $TB_HOME'<br \/>\nalias tbbin='cd $TB_HOME\/bin'<br \/>\nalias tblog='cd $TB_HOME\/instance\/$TB_SID\/log'<br \/>\nalias tbcfg='cd $TB_HOME\/config'<br \/>\nalias tbcfgv='vi $TB_HOME\/config\/$TB_SID.tip'<br \/>\nalias tbcli='cd ${TB_HOME}\/client\/config'<br \/>\nalias tbcliv='vi ${TB_HOME}\/client\/config\/tbdsn.tbr'<br \/>\n#alias tbcliv='vi ${TB_HOME}\/client\/config\/tbnet_alias.tbr'<br \/>\nalias tbi='cd ~\/tbinary'<br \/>\n#alias clean='tbdown clean'<br \/>\n#alias dba='tbsql sys\/tibero'<br \/>\nalias tm='cd ~\/tbinary\/monitor;monitor;cd -'<br \/>\n#alias tbdata='cd $TB_HOME\/tbdata'<br \/>\n####################################################<br \/>\n####################################################<br \/>\nelif [ $ans -eq 2 ]; then<br \/>\n######## TIBERO ENV ########<br \/>\nexport TB_HOME=\/Tibero\/tas\/tibero6<br \/>\nexport TB_SID=tas1<br \/>\nexport TB_PROF_DIR=$TB_HOME\/bin\/prof<br \/>\nexport PATH=.:$TB_HOME\/bin:$TB_HOME\/client\/bin:~\/tbinary\/monitor:$PATH<br \/>\nexport LD_LIBRARY_PATH=$TB_HOME\/lib:$TB_HOME\/client\/lib:$LD_LIBRARY_PATH<br \/>\nexport SHLIB_PATH=$LD_LIBRARY_PATH:$SHLIB_PATH<br \/>\nexport LIBPATH=$LD_LIBRARY_PATH:$LIBPATH<br \/>\n######## TIBERO alias ########<br \/>\nalias tbhome='cd $TB_HOME'<br \/>\nalias tbbin='cd $TB_HOME\/bin'<br \/>\nalias tblog='cd $TB_HOME\/instance\/$TB_SID\/log'<br \/>\nalias tbcfg='cd $TB_HOME\/config'<br \/>\nalias tbcfgv='vi $TB_HOME\/config\/$TB_SID.tip'<br \/>\nalias tbcli='cd ${TB_HOME}\/client\/config'<br \/>\nalias tbcliv='vi ${TB_HOME}\/client\/config\/tbdsn.tbr'<br \/>\n#alias tbcliv='vi ${TB_HOME}\/client\/config\/tbnet_alias.tbr'<br \/>\nalias tbi='cd ~\/tbinary'<br \/>\n#alias clean='tbdown clean'<br \/>\n#alias dba='tbsql sys\/tibero'<br \/>\nalias tm='cd ~\/tbinary\/monitor;monitor;cd -'<br \/>\n#alias tbdata='cd $TB_HOME\/tbdata'<br \/>\nelse<br \/>\necho \"Tibero Environment Not Set !!!\"<br \/>\nfi<br \/>\n####################################################<br \/>\n####################################################<br \/>\n[\/crayon]<\/p>\n<p>Log<br \/>\n[crayon]alias tracelog=\"tail -f ${TB_HOME}\/instance\/tas1\/log\/tracelog\/trace.log\"<br \/>\nalias dbmslog=\"tail -f ${TB_HOME}\/instance\/tas1\/log\/dbmslog\/dbms.log\"<br \/>\nalias listenerlog=\"tail -f ${TB_HOME}\/instance\/tas1\/log\/lsnr\/trace_list.log\"<br \/>\nalias tracelistlog=\"tail -f ${TB_HOME}\/instance\/tas1\/log\/cm\/trace_list.log\"<br \/>\nalias th='cd $TB_HOME'<br \/>\n[\/crayon]<\/p>\n<p>[crayon]alias tbbin=\"cd ${TB_HOME}\/bin\"<br \/>\nalias tblog=\"cd ${TB_HOME}\/instance\/$TB_SID\/log\"<br \/>\nalias tbcfg=\"cd ${TB_HOME}\/config\"<br \/>\nalias tbcfgv=\"vi ${TB_HOME}\/config\/$TB_SID.tip\"<br \/>\nalias tbcli=\"cd ${TB_HOME}\/client\/config\"<br \/>\nalias tbcliv=\"vi ${TB_HOME}\/client\/config\/tbdsn.tbr\"<br \/>\n#alias tbcliv='vi ${TB_HOME}\/client\/config\/tbnet_alias.tbr'<br \/>\n#alias tbdata='cd $TB_HOME\/tbdata'<br \/>\nalias tbi='cd ~\/tbinary'<br \/>\nalias clean='tbdown clean'<br \/>\nalias dba='tbsql sys\/tibero'<br \/>\nalias tbps=\"ps -ef |grep tbs\"<br \/>\n##########################<br \/>\nalias c='clear'<br \/>\nalias ls='ls -h --color'<br \/>\nalias lx='ls -lXB' # Sort by extension.<br \/>\nalias lk='ls -lSr' # Sort by size, biggest last.<br \/>\nalias lt='ls -ltr' # Sort by date, most recent last.<br \/>\nalias lc='ls -ltcr' # Sort by\/show change time, most recent last.<br \/>\nalias lu='ls -ltur' # Sort by\/show access time, most recent last.<br \/>\n# The ubiquitous 'll': directories first, with alphanumeric sorting:<br \/>\nalias ll=\"ls -lv --group-directories-first\"<br \/>\nalias lm='ll |more' # Pipe through 'more'<br \/>\nalias lr='ll -R' # Recursive ls.<br \/>\nalias la='ll -A' # Show hidden files.<br \/>\nalias tree='tree -Csuh' # Nice alternative to 'recursive ls' &#8230;<br \/>\n[\/crayon]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Finally, another RDBMS solution supports an active-active cluster like Oracle&#8217;s RAC. <\/p>\n","protected":false},"author":1,"featured_media":2731,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[352,338],"tags":[345],"class_list":["post-2728","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-high-availability","category-oracle-tr","tag-oracle"],"_links":{"self":[{"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/posts\/2728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/comments?post=2728"}],"version-history":[{"count":0,"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/posts\/2728\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sysdba.org\/en\/wp-json\/"}],"wp:attachment":[{"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/media?parent=2728"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/categories?post=2728"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sysdba.org\/en\/wp-json\/wp\/v2\/tags?post=2728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}