Feed aggregator

REGEXP_LIKE - Pattern match with complex logic

Tom Kyte - 56 min 24 sec ago
I want to do regexp pattern matching and I couldn't. Please help. I want to evaluate the value that is going to be inserted into the DB. I want to perform a check as below. Only four special characters and alphabets are allowed. other special ch...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - 56 min 24 sec ago
I asked you one last time and I will ask you one more question. If !=0, we know we should include null, but the result is not null. If !=0, it is different from zero and null, should not it also contain null if it is not 0?
Categories: DBA Blogs

Select from a table where a key value has matching link for all key values of another table

Tom Kyte - 56 min 24 sec ago
I have a set of three tables (T1, T2, T3) that represent dictionary data from external sources. I need to match the data from table 1 with the data in table 3 where <b><i>all</i></b> the rows in table 1 for a given OR_ID are reflected in table 3. T...
Categories: DBA Blogs

Update statement to flag rows

Tom Kyte - 56 min 24 sec ago
Hello, Ask Tom team. I'm using the query below to load rows to a destination database based on some conditions. After this is done I want to flag those rows in order to exclude them in the next SSIS ETL run. <code>select t1.invoice_sender,t1.ei...
Categories: DBA Blogs

latch undo global data

Tom Kyte - 56 min 24 sec ago
Hi team, I see spikes in OEM for the wait event "latch: undo global data". This is on an insert statement with a concurrency of 50 inserts per second. Due to the heavily loaded DB, the ASH report takes a long time. It would be helpful if you share some s...
Categories: DBA Blogs

bound variables

Tom Kyte - 56 min 24 sec ago
I would like to know more about SQL injection. Why is it so hard to tell Oracle that a certain string is a parameter and not part of a SQL command? For example, if a person calls himself Delete, can his name not be used in a search? And i...
Categories: DBA Blogs

unable to connect using database link

Tom Kyte - 56 min 24 sec ago
DEAR TOM, I CREATED A DATABASE LINK ON MY LOCAL DATABASE USING THE FOLLOWING COMMANDS. SQL> CREATE DATABASE LINK RP 2 CONNECT TO PRINCE 3 IDENTIFIED BY PRINCE 4 USING 'ORB'; Database link created. SQL> SELECT COUNT(*) FROM DUAL@...
Categories: DBA Blogs

Using awk to remove attributes without values

Michael Dinh - Mon, 2019-02-18 20:41

Attributes without values are displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
CV_DESTLOC=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
HOSTING_MEMBERS=
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NEXT_CHECK_TIME=
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

[oracle@racnode-dc1-1 ~]$

Attributes without values are NOT displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|awk -F'=' '$2'
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESTART_ATTEMPTS=5
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
[oracle@racnode-dc1-1 ~]$

You might ask why I am doing this.

I am reviewing the configuration before implementation and will compare it with the configuration after implementation.

The fewer items I need to look at before implementation, the better.
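The `awk -F'=' '$2'` one-liner used above keeps a line only when its second '='-delimited field is "true" in awk terms, i.e. non-empty and, for numeric-looking values, non-zero. A minimal reproduction on made-up key=value lines:

```shell
# Reproduce the filter on made-up key=value lines:
# awk -F'=' '$2' keeps a line only when field 2 (the value) is truthy,
# i.e. non-empty AND, for numeric-looking values, non-zero.
printf 'NAME=ora.cvu\nACTIONS=\nLOAD=1\nSTART_TIMEOUT=0\n' | awk -F'=' '$2'
# prints:
# NAME=ora.cvu
# LOAD=1
```

Note the side effect visible in the filtered crsctl output above: zero-valued attributes (ACTIVE_PLACEMENT=0, START_TIMEOUT=0, and so on) are also dropped, not just empty ones. If you want to keep zero values and drop only empty ones, use `awk -F'=' '$2 != ""'` instead.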

Performance issue with data densification process

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, I am facing an issue while making sparse data dense. Here is the problem statement: We are getting price information for securities from upstream in a file and prices will come only when either there will be new security on boarded or t...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - Mon, 2019-02-18 11:26
Coding 1) <code>select comm from emp where comm is not null and comm != 0;</code> Coding 2) <code>select comm from emp where comm != 0;</code> The results of Coding 1 and Coding 2 are the same. I know that the values ??of null and 0 ar...
Categories: DBA Blogs

Table and Index maintenance

Tom Kyte - Mon, 2019-02-18 11:26
Good Afternoon Tom, I'm going to develop a little SQL Tool for maintenance of compress tables and indexes for our DWH Schema, our clients have Oracle EE (11.2 and 12.2), my "big" doubt is use or not use parallel execution because i see that using ...
Categories: DBA Blogs

writing a stand-alone application to continuously monitor a database queue (AQ)

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, A question regarding oracle AQ... I wish to write a small stand-alone application that would *constantly* monitor a queue (only one queue) for the arrival of a message and as soon as a mesage arrives, take some action. I figured I could use...
Categories: DBA Blogs

Best way to enforce cross-row constraints?

Tom Kyte - Mon, 2019-02-18 11:26
I use the database to declare (and enforce) as much application logic as I can. What I'd like to do is to enforce application constraints across related rows, if possible. As a contrived example, suppose we have a table of Agreements and a secon...
Categories: DBA Blogs

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue

Online Apps DBA - Mon, 2019-02-18 08:00

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue   Want to know how to solve an Online Patching (ADOP) issue? Visit: https://k21academy.com/appsdba23 & learn by following the steps: ✔ Run Prepare Phase ✔ Look at ADOP Logs ✔ Errors in ADOP ✔ Root Causes, Fix and Revise to troubleshoot and solve the error. Write […]

The post [Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Understanding grid disks in Exadata

Amardeep Sidhu - Mon, 2019-02-18 07:07

Use of Exadata storage cells seems to be a poorly understood concept. A lot of people are confused about how exactly ASM makes use of disks from storage cells. Many folks assume there is some sort of RAID configured in the storage layer, whereas there is nothing like that. I will try to explain some of the concepts in this post.

Let’s take the example of an Exadata quarter rack that has 2 db nodes and 3 storage nodes (node means a server here). A few things to note:

  • The space for binaries installation on db nodes comes from the local disks installed in the db nodes (600GB * 4 (expandable to 8) configured in RAID5). In case you are using OVM, the same disks are used for keeping configuration files, virtual disks for VMs etc.
  • All of the ASM space comes from storage cells. The minimum configuration is 3 storage cells.

So let’s try to understand what makes up a storage cell. There are 12 disks in each storage cell (the latest X7 cells come with 10 TB disks). As mentioned above, there are 3 storage cells in the minimum configuration, so we have a total of 36 disks. There is no RAID configured in the storage layer; all the redundancy is handled at the ASM level. So, to create a disk group:

  • First of all, cell disks are created on each storage cell. 1 physical disk makes 1 cell disk, so a quarter rack has 36 cell disks.
  • To divide the space into the various disk groups (by default only two disk groups are created: DATA & RECO; you can choose how much space to give to each of them), grid disks are created. A grid disk is a partition on a cell disk; a slice of a disk, in other words. A slice from each cell disk must be part of both disk groups. We can’t have, say, DATA on 18 disks out of 36 and RECO on the other 18; that is not supported. Let’s say you decide to allocate 5 TB to DATA grid disks and 4 TB to RECO grid disks (out of 10 TB on each disk, approx 9 TB is what you get as usable). So you will divide each cell disk into 2 parts, 5 TB and 4 TB, and you will have 36 slices of 5 TB each and 36 slices of 4 TB each.
  • DATA disk group will be created using the 36 5 TB slices where grid disks from each storage cell constitute one failgroup.
  • Similarly RECO disk group will be created using the 36 4 TB slices.
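The slice arithmetic above can be checked with a quick back-of-the-envelope calculation (the 5 TB / 4 TB split is the example from the text; the totals are raw capacity, before ASM normal or high redundancy mirroring reduces the usable figure):

```shell
# Quarter rack with HC disks: 3 cells x 12 disks; DATA/RECO slice sizes from the example above.
cells=3; disks_per_cell=12; data_slice=5; reco_slice=4
total_disks=$((cells * disks_per_cell))                  # 36 cell disks -> 36 grid disks per disk group
echo "grid disks per disk group: $total_disks"           # 36
echo "DATA raw: $((total_disks * data_slice)) TB"        # 180 TB
echo "RECO raw: $((total_disks * reco_slice)) TB"        # 144 TB
```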

What we have discussed above is a quarter rack scenario with High Capacity (HC) disks. There can be somewhat different configurations too:

  • Instead of HC disks, you can have the Extreme Flash (EF) configuration, which uses flash cards in place of disks. Everything remains the same except the count: instead of 12 HC disks there are 8 flash cards per cell.
  • With X3, I think, Oracle introduced an eighth rack configuration. In an eighth rack, the db nodes come with half the cores (of quarter rack db nodes) and the storage cells come with 6 disks in each cell. So here you have only 18 disks in total. Everything else works the same way.

I hope this clarified some of the doubts about grid disks.


Categories: BI & Warehousing

[Video 2 of 5] 3 Ways to Connect to Oracle Cloud

Online Apps DBA - Mon, 2019-02-18 03:17

There are 3 ways to connect to the Oracle Cloud! Leave a comment below and share how many you know. Note: We’ve covered these 3 ways in our 2nd video, part of Networking In Oracle Cloud, here: https://k21academy.com/1z093214 […]

The post [Video 2 of 5] 3 Ways to Connect to Oracle Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Batch Architecture - Designing Your Cluster - Part 1

Anthony Shorten - Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command line utility, named bedit.sh, that provides a wizard-style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a
*************************************************
* Environment Configuration demo                *
*************************************************

 50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use with the following recommendations:

  • Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options will vary. Use the -h option for a full list.
  • In-Built Help. If you do not know what a parameter or an object type is about, you can use the help <topic> command. For example, help threadpoolworker gives you advice about the approaches you can take for threadpools. If you want a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, if you want to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. The save command makes all changes in the session reflect in the relevant file; conversely, if you make a mistake you can exit without saving the session.
  • Informative. It will tell you which file you are editing at the start of the session so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement different architectures from simple to complex. Using Batch Edit there are three cluster types supported (you must choose one type per environment).

Cluster Type (template code), with use cases and comments:

  • Single Server (ss). The cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
  • Uni-Cast (wka). The cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named within the cluster. This is recommended for sites that want to lock down a cluster to specific hosts and do not want to use multi-cast protocols. Administrators will have to name the list of hosts, known as Well Known Addresses, that are part of the cluster as part of this configuration.
  • Multi-Cast (mc). The cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites that want a dynamic configuration where threadpools and submitters are accepted on demand. This is the lowest amount of configuration for product clusters, as threadpools can join a cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration with the cluster restricted to a single host. The cluster configuration is restricted networking wise within the configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly in host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  • <socketnumber> is the socket number to edit
  • <parameter> is either wkaaddress (the host or IP address of the server) or wkaport (the port number on that host to use)
  • <value> is the value for the parameter

For example: set socket.1 wkaaddress host1

To use this cluster style ensure the following:

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If they are different, the port number has to be changed in the main configuration file for the machines in the cluster to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).
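For a larger wka cluster, the repetitive add socket / set socket commands can be generated in a loop and then pasted into the bedit.sh -c -t wka session. A hypothetical sketch (host1, host2 and port 7574 are illustrative placeholders, not values from the post):

```shell
# Emit the bedit commands for a two-host well-known-address cluster.
# The hostnames and port here are made-up placeholders; substitute your own.
i=1
for h in host1 host2; do
  echo "add socket"
  echo "set socket.$i wkaaddress $h"
  echo "set socket.$i wkaport 7574"
  i=$((i + 1))
done
```

Remember to issue the save command in the bedit session afterwards so the generated configuration is written to the file.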
Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP Address are broadcast across your network per cluster. It requires very little configuration and the threadpools can dynamically connect to that cluster with little configuration. It uses the multi-cast protocol which network administrators either love or hate. The configuration is similar to the Single Server but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administrator Guide for additional configuration advice.
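For reference, the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings mentioned above live in ENVIRON.INI; a hypothetical fragment (the address and port values are illustrative only, agree real values with your network administrators):

```shell
# ENVIRON.INI fragment (illustrative values; multicast address/port are site-specific)
COHERENCE_CLUSTER_ADDRESS=239.192.0.10
COHERENCE_CLUSTER_PORT=7574
```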

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent cross environment unintentional clustering.
  • Set a Cluster Type and Stick with it. It is possible to migrate from one cluster type to another (without changing other objects) but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross-cluster issues. The Prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross-cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.

Next Steps

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).

Microsoft Azure: account upgrade to keep going with free products

Dietrich Schroff - Sun, 2019-02-17 12:39
Today I got an e-mail about my Azure account:


Microsoft provides a little FAQ for this upgrade process:
So there is no option not to upgrade; "you won't be able to access any Azure services" is not really an option...

There are 4 upgrade paths I can choose from:
Since I use this account only for my blog, I decided to choose "no technical support", which is perfectly adequate.

After hitting the upgrade button, the notifications bar came up with the following message:
And the subscriptions tab inside the Azure portal shows:



GoldenGate XAG APP VIP Revisited

Michael Dinh - Sat, 2019-02-16 08:17

For unknown reasons, the XAG integration for the GoldenGate target was eradicated without a trace (I was not able to find any).

When running crsctl at target, no resources were available.

crsctl stat res -t -w 'TYPE = xag.goldengate.type'
crsctl stat res -t|egrep -A2 'dbfs|xag'

Here is an example from source:

$ crsctl stat res -t -w 'TYPE = xag.goldengate.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02           STABLE
--------------------------------------------------------------------------------

$ crsctl stat res -t|egrep -A2 'dbfs|xag'
dbfs_mount
               ONLINE  ONLINE       host_source01            STABLE
               ONLINE  ONLINE       host_source02            STABLE
--
ora.dbfs.db
      1        ONLINE  ONLINE       host_source01            Open,STABLE
      2        ONLINE  ONLINE       host_source02            Open,STABLE
--
xag.gg_ue-vip.vip
      1        ONLINE  ONLINE       host_source01            STABLE
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02            STABLE

Now I need to set up XAG for the target RAC cluster.

FYI: the XAG Bundled Agent was not downloaded; instead, I used the one available from the GRID_HOME.

$ agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 3.1.0

$ agctl query deployment
The Oracle Grid Infrastructure Agents deployment is bundled

Creating the XAG resource using 2 commands produced different metadata than using 1 command.

The difference between the FILESYSTEMS values is expected due to the change from DBFS to ACFS.

Currently, the change is being implemented at the target.

Here is an example using 2 commands:

As Root:
appvipcfg create -network=1 -ip=10.30.91.158 -vipname=xag.gg_ue-vip.vip -user=ggsuser -group=oinstall

As Oracle:
agctl add goldengate gg_ue \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target04,target02 \
--vip_name xag.gg_target-vip.vip \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2

Create source_xag_goldengate.txt and target_xag_goldengate.txt using:
crsctl stat res -w "TYPE = xag.goldengate.type" -p
$ diff source_xag_goldengate.txt target_xag_goldengate.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
---
< AUTO_START=restore
> AUTO_START=never
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=
---
< VIP_CREATED=1
> VIP_CREATED=0
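In the diff output above, lines prefixed with < come from the first file (source) and lines prefixed with > from the second (target). A tiny reproduction with made-up files:

```shell
# '<' = line only in the first file, '>' = line only in the second.
# The attribute values are made-up examples mirroring the diff above.
printf 'AUTO_START=restore\n' > /tmp/a.txt
printf 'AUTO_START=never\n'   > /tmp/b.txt
diff /tmp/a.txt /tmp/b.txt || true   # '|| true': diff exits non-zero when files differ
```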

Here is an example using 1 command:

As Root:
agctl add goldengate gg_target \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target01,target02 \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba

$ diff source_xag_goldengate.txt target_xag_goldengate2.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=hard(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs) pullup(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs)
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:ora.acfs_data.acfs_vol.acfs)

In conclusion, I will be creating XAG resources using the 1-command form from now on, as it provides more metadata.

PostgreSQL : barman rsync method vs streaming method

Yann Neuhaus - Sat, 2019-02-16 06:16

Barman is a tool to perform backup and recovery for PostgreSQL databases. It can perform backups using two methods:
-rsync/ssh
-streaming
In this blog I am going to explain how to use these two methods to back up a PostgreSQL database. Generally it is good practice to dedicate a server to barman instead of installing it on the database server. My environment is described below:
postgreSQL server: dbi-pg-essentials 192.168.22.101 Centos 7
barman server: pgservertools 192.168.22.104 Oracle Linux 7
postgreSQL version: 11.1
barman version: 2.6
The first step is to install barman on the barman server pgservertools

[root@pgservertools network-scripts]# yum install  barman.noarch barman-cli.noarch
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package barman.noarch 0:2.6-1.rhel7 will be installed
---> Package barman-cli.noarch 0:1.3-1.rhel7.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch       Version             Repository                  Size
================================================================================
Installing:
 barman         noarch     2.6-1.rhel7         pgdg10-updates-testing     300 k
 barman-cli     noarch     1.3-1.rhel7.1       pgdg10-updates-testing      14 k

Transaction Summary
================================================================================
...
...
Installed:
  barman.noarch 0:2.6-1.rhel7         barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@pgservertools network-scripts]#

The installation will create a Linux user named barman.
As the rsync method needs password-less connections between the two servers for the barman user, we have to configure SSH keys.
On the barman server pgservertools, let's create keys with the user barman and then copy the public key to the database server dbi-pg-essentials for the user postgres.

-bash-4.2$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/barman/.ssh/id_rsa):
Created directory '/var/lib/barman/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/barman/.ssh/id_rsa.
Your public key has been saved in /var/lib/barman/.ssh/id_rsa.pub.
The key fingerprint is:
f4:b7:6b:6e:38:25:ae:be:7f:9a:34:03:a1:1c:a0:ac barman@pgservertools
The key's randomart image is:
+--[ RSA 2048]----+
|    .            |
| . . .           |
|  o   . o        |
| .   . + o       |
|E     o S . .    |
|         .....   |
|         .++.    |
|         .+++.   |
|       .+++Bo    |
+-----------------+
-bash-4.2$ 


-bash-4.2$ cd .ssh/
-bash-4.2$ ls
id_rsa  id_rsa.pub


-bash-4.2$ ssh-copy-id postgres@dbi-pg-essentials
The authenticity of host 'dbi-pg-essentials (192.168.22.101)' can't be established.
ECDSA key fingerprint is 33:65:38:f4:eb:5b:f4:10:d3:36:7b:ea:5a:70:33:18.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@dbi-pg-essentials's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'postgres@dbi-pg-essentials'"
and check to make sure that only the key(s) you wanted were added.

If everything is fine, barman should be able to connect to the database server as the postgres Linux user without a password:

-bash-4.2$ hostname
pgservertools.localdomain
-bash-4.2$ id
uid=994(barman) gid=992(barman) groups=992(barman) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ ssh postgres@dbi-pg-essentials hostname
dbi-pg-essentials
-bash-4.2$

On the database server I also installed the package barman-cli.noarch, which will allow us to use the command barman-wal-archive. We will talk about this later.

[root@dbi-pg-essentials ~]# yum install barman-cli.noarch
Loaded plugins: fastestmirror
dbipgessentials                                          | 3.6 kB     00:00
edb-repos                                                | 2.4 kB     00:00
…
…
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : barman-cli-1.3-1.rhel7.1.noarch                              1/1
  Verifying  : barman-cli-1.3-1.rhel7.1.noarch                              1/1

Installed:
  barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@dbi-pg-essentials ~]# 
rsync backup

As specified earlier, in this case the backup is done using rsync. But there are several ways to ship WAL files to the barman server, so before talking about the barman configuration, let's take a moment to look at WAL shipping.
1- With WAL archiving
To better understand, I have included this picture from the barman documentation. As we can see, the backup is done via rsync and the WALs are sent via barman-wal-archive. This utility comes with barman 2.6; before barman 2.6, the rsync command was used to send WALs to barman.
The documentation states that using barman-wal-archive instead of rsync/SSH reduces the risk of data corruption of the shipped WAL file on the Barman server.
rsync_1
The use of barman-wal-archive is configured via the postgresql.conf file on the database server, as the value of the archive_command parameter. Below are the relevant values in my postgresql.conf file.

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

Before barman 2.6, we would use the following archive_command to send WALs to the barman server:

archive_command = 'rsync -a  %p  barman@pgservertools:/var/lib/barman/pgserver11/incoming/%f' # command to use to archive a logfile segment
[postgres@dbi-pg-essentials PG1]$
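The grep shown above can be turned into a small sanity check that the three archiving parameters are set as intended. A minimal sketch (it writes a sample postgresql.conf fragment to a temporary path; in real use you would point $conf at the live file instead):

```shell
# Write a sample postgresql.conf fragment and count the three archiving settings.
# /tmp/... is a made-up path for illustration; use your real $PGDATA/postgresql.conf.
conf=/tmp/postgresql.conf.sample
cat > "$conf" <<'EOF'
wal_level = replica
archive_mode = on
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'
EOF
grep -Ec '^(wal_level|archive_mode|archive_command)' "$conf"   # expect 3
```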

2- With WAL archiving and WAL streaming
This picture from the barman documentation will help to better understand:
rsync_2
To use WAL streaming to the barman server, we need pg_receivewal (pg_receivexlog up to PostgreSQL 10) to be installed on the barman server. Be careful with the version of pg_receivewal: in my case I installed version 11.1, as my PostgreSQL is 11.1.

[postgres@pgservertools bin]$ /usr/pgsql-11/bin/pg_receivewal -V
pg_receivewal (PostgreSQL) 11.1
[postgres@pgservertools bin]$

A streaming connection should also be configured, and the parameter streaming_archiver should be set to on.
To summarize, let's say that I want to configure barman with:
-the rsync method
-barman-wal-archive and WAL streaming
The barman configuration file /etc/barman.conf should then look like:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = rsync
streaming_archiver = on
archiver = on
path_prefix=/usr/pgsql-11/bin/
[postgres@pgservertools bin]$

And the postgresql.conf should contain the following entries:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

The first thing is to test that the barman configuration is fine for the PostgreSQL database. The check command should not return any errors. On the barman server, with the user barman:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 0 backups, expected at least 0)
        ssh: OK (PostgreSQL server)
        not in recovery: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$
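To verify end-to-end WAL archiving without waiting for a natural segment switch, a WAL switch can be forced from the barman server. A sketch, run as the barman user (the --archive option assumes a reasonably recent barman version):

```shell
# Force a WAL segment switch on the PostgreSQL server and wait
# until the resulting segment has been archived by barman
barman switch-wal --force --archive pgserver11
```

This is a quick way to confirm that the archive_command and the barman archiver are actually working before taking the first backup.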

Now we can launch a backup using the backup command, on the barman server with the user barman:

-bash-4.2$ barman backup pgserver11
Starting backup using rsync-exclusive method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T153350
Backup start at LSN: 0/2E000060 (00000005000000000000002E, 00000060)
This is the first backup for server pgserver11
Starting backup copy via rsync/SSH for 20190215T153350
Copy done (time: 12 seconds)
This is the first backup for server pgserver11
WAL segments preceding the current backup have been found:
        00000005000000000000002D from server pgserver11 has been removed
Asking PostgreSQL server to finalize the backup.
Backup size: 74.1 MiB
Backup end at LSN: 0/2E000168 (00000005000000000000002E, 00000168)
Backup completed (start time: 2019-02-15 15:33:52.392144, elapsed time: 15 seconds)
Processing xlog segments from file archival for pgserver11
        00000005000000000000002E
        00000005000000000000002E.00000060.backup
-bash-4.2$

We can list the existing backups on the barman server with the barman user:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 0 B
-bash-4.2$
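Details of a given backup (begin/end WAL, size, timeline, tablespaces) can be displayed with the show-backup command; barman accepts the shortcut latest for the most recent backup:

```shell
# Show detailed metadata for the most recent backup of pgserver11
barman show-backup pgserver11 latest
```
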
Streaming backup

Since version 2.0, barman supports streaming replication for backups. This method uses the native pg_basebackup.
1- Streaming-only backup
This picture from the barman documentation may help:
[Figure: stream_1]
As we can see, in this case backups are done via streaming. WALs also use the streaming protocol.
2- WAL archiving and WAL streaming
Once again, the following picture may help:
[Figure: stream_2]
In this case we configure standard archiving as well, to implement a more robust architecture.

For example, to implement a barman configuration with streaming backups, WAL streaming, and WAL archiving, the /etc/barman.conf should look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = postgres
streaming_archiver = on
archiver = on
slot_name=barman
path_prefix=/usr/pgsql-11/bin/

and the postgresql.conf:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment
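Since slot_name=barman is set in this configuration, the replication slot must exist before the check can pass. A sketch, run as the barman user on the barman server:

```shell
# Create the replication slot declared as slot_name in /etc/barman.conf
barman receive-wal --create-slot pgserver11
# The receive-wal process itself is normally started and supervised
# by barman's periodic maintenance command
barman cron
```

In production, barman cron is typically scheduled every minute via crontab, which also keeps the WAL receiver running.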

So the check should not return any errors:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        replication slot: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 1 backups, expected at least 0)
        pg_basebackup: OK
        pg_basebackup compatible: OK
        pg_basebackup supports tablespaces mapping: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$

When we launch a backup, we can see that barman is using pg_basebackup:

-bash-4.2$ barman backup pgserver11
Starting backup using postgres method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T160757
Backup start at LSN: 0/2F0001A8 (00000005000000000000002F, 000001A8)
Starting backup copy via pg_basebackup for 20190215T160757
Copy done (time: 11 seconds)
Finalising the backup.
Backup size: 73.9 MiB
Backup end at LSN: 0/31000060 (000000050000000000000031, 00000060)
Backup completed (start time: 2019-02-15 16:07:57.919595, elapsed time: 11 seconds)
Processing xlog segments from file archival for pgserver11
        000000050000000000000030
        000000050000000000000031
Processing xlog segments from streaming for pgserver11
        000000050000000000000030
-bash-4.2$

The available backups are now:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T160757 - Fri Feb 15 16:08:09 2019 - Size: 73.9 MiB - WAL Size: 0 B
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 48.3 KiB

To restore with barman, we use the recover command. For example, the following command will restore the backup 20190215T160757 onto the server dbi-pg-essentials_3, in the directory /u02/pgdata/PGRESTORE:

-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" pgserver11 20190215T160757 /u02/pgdata/PGRESTORE
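Because the archived WALs are kept alongside the base backups, barman recover also supports point-in-time recovery. A sketch with a hypothetical target timestamp:

```shell
# Restore the same backup remotely, but stop WAL replay
# at the given point in time instead of recovering to the end
barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" \
    --target-time "2019-02-15 16:30:00" \
    pgserver11 20190215T160757 /u02/pgdata/PGRESTORE
```
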
Conclusion

In this blog I have tried to explain different scenarios for using barman. We talked about the rsync method and the streaming method for backups. A choice must be made before starting the setup. One can check the documentation for more information.

Ref: http://docs.pgbarman.org/

The article PostgreSQL : barman rsync method vs streaming method first appeared on the dbi services blog.
