Feed aggregator

Global non-partitioned index on table partitions

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have recently got some SQL statements that are not performing well. <code>select * from v where a=? and b not in(,,,,....) and c =? and rownum<-100 </code> where v is a view. The original SQL statement is similar to the above. From explai...
Categories: DBA Blogs

Event Based Job is not working

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have been struggling for several days with the following issue. I am trying to implement an event based job. At the start all was working fine. But after several payload type modifications + several times recreated queue + recreated scheduled job ... schedu...
Categories: DBA Blogs

Timestamp Functions and Presentation Variables in Oracle Cloud Analytics

Rittman Mead Consulting - Fri, 2019-11-01 04:12

One of the most popular Rittman Mead blog posts over the last 10 years is Timestamps and Presentation Variables. As we are seeing more and more migrations to OAC, we decided to review and revise this post for the latest version of Oracle Cloud Analytics (OAC), 105.4.0-140 as of October 2019. Read more about the latest updates here.

--

One could say that creating a chart is not the most complex task in the world of Business Intelligence, but we would argue that creating a meaningful report, one that perfectly illustrates the message hidden in the data and therefore adds value to management, is nowhere close to being easy! A good way to make a report as informative as possible is to use trends and comparisons, and the perfect tools for that are the time analysis functions: for example, comparing sales in a period of time this year to the same period of time the year before, or measuring the similarity or dissimilarity of sales in different months of the year.

Demo Platform

I have used a free trial instance of OAC for this demo. If you haven’t done so yet, sign up for a free 30-day trial Oracle Cloud account (different from an Oracle account). Use the account to access the Oracle Cloud Infrastructure (OCI) console, which is Oracle’s latest move towards having one integrated cloud platform to manage all your Oracle cloud applications, platforms, and infrastructure in one place.
From the OCI console it takes 5 to 10 minutes to get your free trial instance of OAC up and running. For a detailed step-by-step guide to creating a new instance, read here.

Demo Goals

In this blog post I intend to show you how to combine the power of timestamp functions and presentation variables to create robust, repeatable reports. We will create a report that displays a year over year analysis for any rolling number of periods, by week or month, from any date in time, all determined by the user. This entire demo will only use values from a date and a revenue field.


TIMESTAMP Functions

TIMESTAMPADD() manipulates data of the DATE and DATETIME data types based on a calendar year.

Syntax: TIMESTAMPADD(interval, expr, timestamp)
Example: TIMESTAMPADD(SQL_TSI_MONTH, 12, Time."Order Date")
Description: Adds a specified number of intervals to a timestamp, and returns a single timestamp.
Timestamp Interval (TSI) Options: SQL_TSI_SECOND, SQL_TSI_MINUTE, SQL_TSI_HOUR, SQL_TSI_DAY, SQL_TSI_WEEK, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_YEAR

Read more about other calendar functions.

Building Filters

To start building our demo, the filter below returns all dates greater than or equal to 7 days ago, including the current date.
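
The original post showed this filter as a screenshot, which is not reproduced here. As a sketch, assuming the "Periods"."Day Date" column used later in this post, the filter expression would read:

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_DAY, -7, CURRENT_DATE)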

In other words, we now have a functional filter to select all the rows where Date >= a week ago.

As a good practice, always include a second filter giving an upper limit to the time filter. For example "Periods"."Day Date" < CURRENT_DATE ensures that there won’t be any records you don’t want in the mix, and therefore no unnecessary strain on the system.

Let’s go one step further: instead of going 7 days back, we could try to include all the previous days in the current month, in other words dates >= the first day of the month. In this scenario, we can use the DAYOFMONTH() function to get the calendar day of any date. From here it is easy to calculate the number of days in the month so far. Our new filter would look like this:
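
Again sketching the missing screenshot, using the DAYOFMONTH() logic described in the example that follows (subtract the day-of-month number and add one back to land on the 1st):

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(CURRENT_DATE) - 1), CURRENT_DATE)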

For example, if today is October 16th, DAYOFMONTH(CURRENT_DATE) would equal 16. Thus, we would subtract 16 days from CURRENT_DATE to go back to September 30th, and adding one will give us October 1st.

Presentation Variables

A presentation variable is a variable that can be created from the front end (the Analytics interface) as part of one of the following types of dashboard prompts:

  • Column prompt: associated with a column, and the values that it can take come from the column values. For information on working with column prompts, see Creating a Column Prompt.
  • Variable prompt: not associated with any column; you define the values that it can take. For information on working with variable prompts, see Creating a Variable Prompt.

Each time a user selects a value in the column or variable prompt, the value of the presentation variable is set to the value that the user selects and is then sent to any references to that variable throughout the dashboard page. These could be filters, formulas and even text boxes.

The first presentation variable we could introduce is to replace CURRENT_DATE with a prompted value. Let’s call this presentation variable pv_Date:

  • Use the syntax @{pv_Date} to call this variable in the reports.
  • For variables of type string, surround the name in single quotes: ‘@{pv_String}’
  • It is good practice to assign a default value to presentation variables so that you can work with your report before publishing it to a dashboard. For example, the default value for pv_Date is CURRENT_DATE, so the new syntax would be @{pv_Date}{CURRENT_DATE}

Demo Time!

Our updated filter after replacing CURRENT_DATE looks like the below. We will refer to this filter later as Filter 1 (F1).
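
Sketching the screenshot again, with the presentation variable substituted into the first-day-of-the-month filter:

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE})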

The filter is starting to take shape. Now let’s say we always want to look at a date range of six months before the selected date. All we need to do is create a nested TIMESTAMP function: we “wrap” our current TIMESTAMP with another one that subtracts six months:
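
A sketch of the nested expression, under the same assumptions:

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -6, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))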

Now we have a filter to select all dates from the first day of the month six months prior to any given date onwards.


To take this one step further, we can create another presentation variable called pv_n to allow the users to determine the number of months to include in this analysis from a dashboard prompt.


Here is the updated version of our filter using the number-of-periods presentation variable with a default value of 6, @{pv_n}{6}. We will refer to the following filter as Filter 2 (F2).
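
Sketched with the same assumptions:

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6}, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))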

Our TIMESTAMPADD function is now fairly robust and will give us any date greater than or equal to the first day of the month n months before any given date. Now we will see what we just created in action by building date ranges that allow a Year over Year analysis for any number of months. Consider the following filter set:
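
The filter set was shown as a screenshot in the original post; a hedged reconstruction, reusing the Filter 1 and Filter 2 expressions and the "Periods"."Day Date" column, might look like:

(     "Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6}, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))
  AND "Periods"."Day Date" <  TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}) )
OR
(     "Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_YEAR, -1, TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6}, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE})))
  AND "Periods"."Day Date" <  TIMESTAMPADD(SQL_TSI_YEAR, -1, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE})) )

The "<" conditions reuse the Filter 1 expression as an upper bound, which is what keeps dates from the current (incomplete) month out of the set.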

This may appear pretty intimidating at first, but if we break it into parts we can start to understand its purpose. Notice we are reusing the exact same expressions from before, Filter 1 and Filter 2. What we have done here is filter on two time periods, separated by the OR statement.

  • The first date range defines the period as the most recent completed n months from any given prompted date value, using a presentation variable with a default of today. Dates in the current month have been removed from the set by the Filter 1 expression used as an upper bound.
  • The second time period, after the OR statement, is exactly the same as the first, only wrapped in another TIMESTAMP function subtracting a year, giving you the same time frame for the prior year.

This allows us to create a report that can run a year over year analysis for a rolling n month time frame determined by the user.

A note on nested TIMESTAMPs: always build nested TIMESTAMPs with the smallest interval first, then wrap larger intervals around it as necessary. In this case our smallest increment is day, wrapped by month, wrapped by year.


Let’s Go Crazy

A more advanced trick: if you use real-time or near-real-time reporting, using CURRENT_DATE may be how you want to proceed. Otherwise, instead of using today as your default date value, use yesterday’s date, since most data is only as current as yesterday. Using yesterday is especially valuable when pulling reports on the first day of the month or year: you generally want the entire previous time period rather than the empty beginning of a new one. So, to implement this, wherever you have @{pv_Date}{CURRENT_DATE}, replace it with @{pv_Date}{TIMESTAMPADD(SQL_TSI_DAY,-1,CURRENT_DATE)}

One more change to make our filter extra flexible: use a new presentation variable to determine whether the year over year values are displayed by month or by week. This can be done by inserting a variable into your SQL_TSI_MONTH and DAYOFMONTH statements, changing them to SQL_TSI_@{pv_INT}{MONTH} and DAYOF@{pv_INT}{MONTH}, where pv_INT is the name of our variable.

Start by creating a dummy variable in your prompt to allow users to select either MONTH or WEEK. You can try something like this: CASE MOD(DAY("Time"."Date"), 2) WHEN 0 THEN 'WEEK' WHEN 1 THEN 'MONTH' END

The updated filter now looks like this:
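
A sketch of the lower bound with the variable substituted, under the same assumptions as before:

"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_@{pv_INT}{MONTH}, -@{pv_n}{6}, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOF@{pv_INT}{MONTH}(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))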

In order for our interaction between Month and Week to run smoothly we have to factor in one last consideration: if we take the date December 1st, 2019 and subtract one year we get December 1st, 2018. However, if we take the first day of this week, Sunday December 15th, 2019, and subtract one year we get Saturday December 15th, 2018. In our analysis this would cause an extra partial week to show up for prior years. To get around this we add a case statement: if '@{pv_INT}{MONTH}' = 'WEEK' then subtract 52 weeks from the first day of the week, else subtract 1 year from the first day of the month. With this, our final filter set will look like this:
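
A sketch of the prior-year boundary produced by that case statement (the full filter set swaps this in for the plain one-year subtraction):

CASE WHEN '@{pv_INT}{MONTH}' = 'WEEK'
     THEN TIMESTAMPADD(SQL_TSI_WEEK, -52, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFWEEK(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))
     ELSE TIMESTAMPADD(SQL_TSI_YEAR, -1, TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) - 1), @{pv_Date}{CURRENT_DATE}))
END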


With the use of these filters and some creative dashboarding, you can construct a report that easily allows you to view a year over year analysis from any date in time for any number of periods either by month or by week.

Filtered by Week intervals:

The formula below will give you the rolling-period value to use in the analysis:

In this post, we created a cloud version of the amazing demo previously described by Brian Hall. As demonstrated, timestamp functions and their power have been among the interesting topics in visualisation and reporting for as long as we at Rittman Mead can remember, and they can still be used with Oracle Cloud services in much the same way as in the past.

Feel free to get in touch and let us know your thoughts and comments.

Categories: BI & Warehousing

pg_auto_failover: Setup and installation

Yann Neuhaus - Fri, 2019-11-01 02:25

When I attended PGIBZ 2019 earlier this year, I talked with Dimitri about pg_auto_failover and promised to have a look at it. Well, almost half a year later, and after we met again at pgconf.eu, it is time to actually do that. You probably already know that Citus Data was acquired by Microsoft earlier this year, and Microsoft has seemed committed to open source for a few years now. pg_auto_failover is one of the projects they contribute back to the PostgreSQL community. This will be a multi-post series, and in this very first post it is all about getting it up and running. In a following post we will then look at failover and switchover scenarios.

As usual, when you need auto failover you need at least three nodes, and pg_auto_failover is no exception to that. The following graphic is stolen from the pg_auto_failover GitHub page:

We have one PostgreSQL master, one PostgreSQL replica, and in addition a monitoring host. In my case that maps to:

pg-af1.ti.dbi-services.com   master                       192.168.22.70
pg-af2.ti.dbi-services.com   replica                      192.168.22.71
pg-af3.ti.dbi-services.com   monitor/cluster management   192.168.22.72

All of these nodes run CentOS 8 and I will be building from source code, as that gives the most flexibility. As pg_auto_failover depends on PostgreSQL (of course), the first step is to install PostgreSQL on all three nodes (PostgreSQL 12 in this setup). If you need further information on how to do that you can e.g. check here. Basically these steps have been executed on all three nodes (given that the postgres user already exists and sudo is configured):

[postgres@pg-af1 ~]$ sudo dnf install -y gcc openldap-devel python36-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget unzip sysstat xorg-x11-xauth systemd-devel bash-completion python36 policycoreutils-python-utils make git
[postgres@pg-af1 ~]$ wget https://ftp.postgresql.org/pub/source/v12.0/postgresql-12.0.tar.bz2
[postgres@pg-af1 ~]$ tar -axf postgresql-12.0.tar.bz2
[postgres@pg-af1 ~]$ cd postgresql-12.0
[postgres@pg-af1 postgresql-12.0]$ sudo mkdir -p /u01 /u02
[postgres@pg-af1 postgresql-12.0]$ sudo chown postgres:postgres /u01 /u02
[postgres@pg-af1 postgresql-12.0]$ PGHOME=/u01/app/postgres/product/12/db_0/
[postgres@pg-af1 postgresql-12.0]$ SEGSIZE=2
[postgres@pg-af1 postgresql-12.0]$ BLOCKSIZE=8
[postgres@pg-af1 postgresql-12.0]$ WALSEGSIZE=64
[postgres@pg-af1 postgresql-12.0]$ ./configure --prefix=${PGHOME} \
> --exec-prefix=${PGHOME} \
> --bindir=${PGHOME}/bin \
> --libdir=${PGHOME}/lib \
> --sysconfdir=${PGHOME}/etc \
> --includedir=${PGHOME}/include \
> --datarootdir=${PGHOME}/share \
> --datadir=${PGHOME}/share \
> --with-pgport=5432 \
> --with-perl \
> --with-python \
> --with-openssl \
> --with-pam \
> --with-ldap \
> --with-libxml \
> --with-libxslt \
> --with-segsize=${SEGSIZE} \
> --with-blocksize=${BLOCKSIZE} \
> --with-systemd \
> --with-extra-version=" dbi services build"
[postgres@pg-af1 postgresql-12.0]$ make all
[postgres@pg-af1 postgresql-12.0]$ make install
[postgres@pg-af1 postgresql-12.0]$ cd contrib
[postgres@pg-af1 contrib]$ make install
[postgres@pg-af1 contrib]$ cd ../..
[postgres@pg-af1 ~]$ rm -rf postgresql*

We will go for an installation from source code of pg_auto_failover as well (again, on all three nodes):

postgres@pg-af1:/home/postgres/ [pg120] git clone https://github.com/citusdata/pg_auto_failover.git
postgres@pg-af1:/home/postgres/ [pg120] cd pg_auto_failover/
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] make
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] make install
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] cd ..
postgres@pg-af1:/home/postgres/ [pg120] rm -rf pg_auto_failover/

That’s it, quite easy. What I especially like is that there are no dependencies on Python or any other libraries except for PostgreSQL. What the installation gives us is basically pg_autoctl:

postgres@pg-af1:/home/postgres/ [pg120] pg_autoctl --help
pg_autoctl: pg_auto_failover control tools and service
usage: pg_autoctl [ --verbose --quiet ]


Available commands:
pg_autoctl
+ create   Create a pg_auto_failover node, or formation
+ drop     Drop a pg_auto_failover node, or formation
+ config   Manages the pg_autoctl configuration
+ show     Show pg_auto_failover information
+ enable   Enable a feature on a formation
+ disable  Disable a feature on a formation
run      Run the pg_autoctl service (monitor or keeper)
stop     signal the pg_autoctl service for it to stop
reload   signal the pg_autoctl for it to reload its configuration
help     print help message
version  print pg_autoctl version

The first step in setting up the cluster is to initialize the monitoring node:

postgres@pg-af3:/home/postgres/ [pg120] pg_autoctl create --help
pg_autoctl create: Create a pg_auto_failover node, or formation

Available commands:
pg_autoctl create
monitor    Initialize a pg_auto_failover monitor node
postgres   Initialize a pg_auto_failover standalone postgres node
formation  Create a new formation on the pg_auto_failover monitor

postgres@pg-af3:/home/postgres/ [pg120] sudo mkdir -p /u02/pgdata
postgres@pg-af3:/home/postgres/ [pg120] sudo chown postgres:postgres /u02/pgdata
postgres@pg-af3:/home/postgres/ [pg120] unset PGDATABASE
postgres@pg-af3:/home/postgres/ [] pg_autoctl create monitor --pgdata /u02/pgdata/PG12/af
INFO  Initialising a PostgreSQL cluster at "/u02/pgdata/PG12/af"
INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/PG12/af --options "-p 5432" --options "-h *" --waitstart
INFO  Granting connection privileges on 192.168.22.0/24
INFO  Your pg_auto_failover monitor instance is now ready on port 5432.
INFO  pg_auto_failover monitor is ready at postgres://autoctl_node@pg-af3:5432/pg_auto_failover
INFO  Monitor has been succesfully initialized.

Once that succeeds you’ll have a new PostgreSQL instance running, along with the pg_auto_failover PostgreSQL background worker processes:

postgres@pg-af3:/home/postgres/ [af] ps -ef | grep "postgres:"
postgres  5958  5955  0 14:15 ?        00:00:00 postgres: checkpointer
postgres  5959  5955  0 14:15 ?        00:00:00 postgres: background writer
postgres  5960  5955  0 14:15 ?        00:00:00 postgres: walwriter
postgres  5961  5955  0 14:15 ?        00:00:00 postgres: autovacuum launcher
postgres  5962  5955  0 14:15 ?        00:00:00 postgres: stats collector
postgres  5963  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor
postgres  5964  5955  0 14:15 ?        00:00:00 postgres: logical replication launcher
postgres  5965  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor worker
postgres  5966  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor worker

The initialization of the monitor node also created a new database and two roles:

postgres@pg-af3:/home/postgres/ [af] psql postgres
psql (12.0 dbi services build)
Type "help" for help.

postgres=# \l
List of databases
Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+----------+----------+-------------+-------------+-----------------------
pg_auto_failover | autoctl  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
postgres         | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
template0        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres
template1        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres

postgres=# \du
List of roles
Role name   |                         Attributes                         | Member of
--------------+------------------------------------------------------------+-----------
autoctl      |                                                            | {}
autoctl_node |                                                            | {}
postgres     | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

What we got in the new database is the pgautofailover extension:

pg_auto_failover=# \dx
List of installed extensions
Name      | Version |   Schema   |         Description
----------------+---------+------------+------------------------------
pgautofailover | 1.0     | public     | pg_auto_failover
plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

For our management kit to work properly, a few PostgreSQL parameters need to be set:

pg_auto_failover=# alter system set log_truncate_on_rotation = 'on';
ALTER SYSTEM
pg_auto_failover=# alter system set log_filename = 'postgresql-%a.log';
ALTER SYSTEM
pg_auto_failover=# alter system set log_rotation_age = '1440';
ALTER SYSTEM
pg_auto_failover=# alter system set log_line_prefix = '%m - %l - %p - %h - %u@%d - %x';
ALTER SYSTEM
pg_auto_failover=# alter system set log_directory = 'pg_log';
ALTER SYSTEM
pg_auto_failover=# alter system set log_min_messages = 'WARNING';
ALTER SYSTEM
pg_auto_failover=# alter system set log_autovacuum_min_duration = '60s';
ALTER SYSTEM
pg_auto_failover=# alter system set log_min_error_statement = 'NOTICE';
ALTER SYSTEM
pg_auto_failover=# alter system set log_min_duration_statement = '30s';
ALTER SYSTEM
pg_auto_failover=# alter system set log_checkpoints = 'on';
ALTER SYSTEM
pg_auto_failover=# alter system set log_statement = 'ddl';
ALTER SYSTEM
pg_auto_failover=# alter system set log_lock_waits = 'on';
ALTER SYSTEM
pg_auto_failover=# alter system set log_temp_files = '0';
ALTER SYSTEM
pg_auto_failover=# alter system set log_timezone = 'Europe/Zurich';
ALTER SYSTEM
pg_auto_failover=# alter system set log_connections=on;
ALTER SYSTEM
pg_auto_failover=# alter system set log_disconnections=on;
ALTER SYSTEM
pg_auto_failover=# alter system set log_duration=on;
ALTER SYSTEM
pg_auto_failover=# select pg_reload_conf();
pg_reload_conf
----------------
t
(1 row)

What we need for the other nodes is the connection string to the monitoring node:

postgres@pg-af3:/home/postgres/ [af] pg_autoctl show uri
postgres://autoctl_node@pg-af3:5432/pg_auto_failover

Once we have that we can proceed with creating the master instance on the first host:

postgres@pg-af1:/home/postgres/ [pg120] unset PGDATABASE
postgres@pg-af1:/home/postgres/ [] sudo mkdir /u02/pgdata
postgres@pg-af1:/home/postgres/ [] sudo chown postgres:postgres /u02/pgdata
postgres@pg-af1:/home/postgres/ [] pg_autoctl create postgres --pgdata /u02/pgdata/12/PG1 --nodename pg-af1.it.dbi-services.com --monitor postgres://autoctl_node@pg-af3:5432/pg_auto_failover
INFO  Found pg_ctl for PostgreSQL 12.0 at /u01/app/postgres/product/12/db_0/bin/pg_ctl
INFO  Registered node pg-af1.it.dbi-services.com:5432 with id 1 in formation "default", group 0.
INFO  Writing keeper init state file at "/home/postgres/.local/share/pg_autoctl/u02/pgdata/12/PG1/pg_autoctl.init"
INFO  Successfully registered as "single" to the monitor.
INFO  Initialising a PostgreSQL cluster at "/u02/pgdata/12/PG1"
INFO  Postgres is not running, starting postgres
INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/12/PG1 --options "-p 5432" --options "-h *" --wait start
INFO  The user "postgres" already exists, skipping.
INFO  CREATE DATABASE postgres;
INFO  The database "postgres" already exists, skipping.
INFO  FSM transition from "init" to "single": Start as a single node
INFO  Initialising postgres as a primary
INFO  Transition complete: current state is now "single"
INFO  Keeper has been succesfully initialized.

Once the master is up, bring up the replica on the second node:

postgres@pg-af2:/home/postgres/ [pg120] pg_autoctl create postgres --pgdata /u02/pgdata/12/PG1 --nodename pg-af2.it.dbi-services.com --monitor postgres://autoctl_node@pg-af3:5432/pg_auto_failover
17:11:42 INFO  Registered node pg-af2.it.dbi-services.com:5432 with id 2 in formation "default", group 0.
17:11:42 INFO  Writing keeper init state file at "/home/postgres/.local/share/pg_autoctl/u02/pgdata/12/PG1/pg_autoctl.init"
17:11:42 INFO  Successfully registered as "wait_standby" to the monitor.
17:11:42 INFO  FSM transition from "init" to "wait_standby": Start following a primary
17:11:42 INFO  Transition complete: current state is now "wait_standby"
17:11:47 INFO  FSM transition from "wait_standby" to "catchingup": The primary is now ready to accept a standby
17:11:47 INFO  The primary node returned by the monitor is pg-af1.it.dbi-services.com:5432
17:11:47 INFO  Initialising PostgreSQL as a hot standby
17:11:47 INFO  Running /u01/app/postgres/product/12/db_0/bin/pg_basebackup -w -h pg-af1.it.dbi-services.com -p 5432 --pgdata /u02/pgdata/12/backup -U pgautofailover_replicator --write-recovery-conf --max-rate 100M --wal-method=stream --slot pgautofailover_standby ...
17:11:49 INFO  pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/2000028 on timeline 1
pg_basebackup: starting background WAL receiver
0/23699 kB (0%), 0/1 tablespace (/u02/pgdata/12/backup/backup_label )
136/23699 kB (0%), 0/1 tablespace (/u02/pgdata/12/backup/global/4184  )
23708/23708 kB (100%), 0/1 tablespace (...data/12/backup/global/pg_control)
23708/23708 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/2000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed

17:11:49 INFO  Postgres is not running, starting postgres
17:11:49 INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/12/PG1 --options "-p 5432" --options "-h *" --wait start
17:11:50 INFO  PostgreSQL started on port 5432
17:11:50 INFO  Transition complete: current state is now "catchingup"
17:11:50 INFO  Keeper has been succesfully initialized.

The next step is to start the so-called keeper process (this is the process which communicates with the monitoring node about state changes):

postgres@pg-af1:/home/postgres/ [] pg_autoctl run --pgdata /u02/pgdata/12/PG1
INFO  Managing PostgreSQL installation at "/u02/pgdata/12/PG1"
INFO  The version of extenstion "pgautofailover" is "1.0" on the monitor
INFO  pg_autoctl service is starting
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".

To integrate that into systemd:

postgres@pg-af2:/home/postgres/ [PG1] pg_autoctl show systemd
20:28:43 INFO  HINT: to complete a systemd integration, run the following commands:
20:28:43 INFO  pg_autoctl -q show systemd --pgdata "/u02/pgdata/12/PG1" | sudo tee /etc/systemd/system/pgautofailover.service
20:28:43 INFO  sudo systemctl daemon-reload
20:28:43 INFO  sudo systemctl start pgautofailover
[Unit]
Description = pg_auto_failover

[Service]
WorkingDirectory = /u02/pgdata/12/PG1
Environment = 'PGDATA=/u02/pgdata/12/PG1'
User = postgres
ExecStart = /u01/app/postgres/product/12/db_0/bin/pg_autoctl run
Restart = always
StartLimitBurst = 0

[Install]
WantedBy = multi-user.target

postgres@pg-af2:/home/postgres/ [PG1] pg_autoctl -q show systemd --pgdata "/u02/pgdata/12/PG1" | sudo tee /etc/systemd/system/pgautofailover.service
[Unit]
Description = pg_auto_failover

[Service]
WorkingDirectory = /u02/pgdata/12/PG1
Environment = 'PGDATA=/u02/pgdata/12/PG1'
User = postgres
ExecStart = /u01/app/postgres/product/12/db_0/bin/pg_autoctl run
Restart = always
StartLimitBurst = 0

[Install]
WantedBy = multi-user.target

postgres@pg-af2:/home/postgres/ [PG1] systemctl list-unit-files | grep pgauto
pgautofailover.service                      disabled
20:30:57 postgres@pg-af2:/home/postgres/ [PG1] sudo systemctl enable pgautofailover.service
Created symlink /etc/systemd/system/multi-user.target.wants/pgautofailover.service → /etc/systemd/system/pgautofailover.service.

If you are on CentOS/Red Hat 8 you will also need the following, as otherwise the service will not start:

postgres@pg-af1:/u01/app/postgres/local/dmk/ [PG1] sudo semanage fcontext -a -t bin_t /u01/app/postgres/product/12/db_0/bin/pg_autoctl
postgres@pg-af1:/u01/app/postgres/local/dmk/ [PG1] restorecon -v /u01/app/postgres/product/12/db_0/bin/pg_autoctl

After rebooting all the nodes (to confirm that the systemd service is working), the state of the cluster reports one primary and one secondary/replica, as expected:

postgres@pg-af3:/home/postgres/ [af] pg_autoctl show state
Name |   Port | Group |  Node |     Current State |    Assigned State
---------------------------+--------+-------+-------+-------------------+------------------
pg-af1.it.dbi-services.com |   5432 |     0 |     1 |           primary |           primary
pg-af2.it.dbi-services.com |   5432 |     0 |     2 |         secondary |         secondary

The various states are documented here.

Remember: as this is based on PostgreSQL 12, there will be no recovery.conf on the replica. The replication parameters have been added to postgresql.auto.conf automatically:

postgres@pg-af2:/u02/pgdata/12/PG1/ [PG1] cat postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
primary_conninfo = 'user=pgautofailover_replicator passfile=''/home/postgres/.pgpass'' connect_timeout=5 host=''pg-af1.it.dbi-services.com'' port=5432 sslmode=prefer sslcompression=0 gssencmode=disable target_session_attrs=any'
primary_slot_name = 'pgautofailover_standby'
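
As a quick sanity check (not part of the original walkthrough), you can confirm streaming replication from the primary with the standard pg_stat_replication catalog view; a minimal sketch, assuming the node names used above:

postgres@pg-af1:/home/postgres/ [PG1] psql -c "select client_addr, state, sync_state from pg_stat_replication;"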

That’s it for the setup. Really easy and simple, I like it. In the next post we’ll have a look at controlled switchover and failover scenarios.

The article pg_auto_failover: Setup and installation appeared first on Blog dbi services.

Getting Started with APEX Plugins

Jeff Kemp - Thu, 2019-10-31 22:21

There is a great deal you can build in APEX without even thinking about using a plugin. The development platform has an excellent range of built-in functionality and components so that almost all user requirements can be covered using standard APEX features. Restricting yourself to only the built-in, declarative features (i.e. “low code”) of the tool will result in applications that are easy to maintain and resistant to regression issues when upgrading. However, there will come a time when some custom code is required, whether some special data processing (using PL/SQL), some special user interface styling (using CSS), or some special user interface behaviour (using JavaScript or jQuery).

After you have built an application with (a minimum of) custom code like this, it sometimes happens that the same user requirement crops up again elsewhere; whether on another page, another application, or another workspace. The first time you encounter this you will probably just do a “copy and paste” to replicate the custom code. When you encounter the requirement a third time, you should be asking yourself, “how can I make this code more re-usable” – because chances are one day you’ll need to fix a bug or make an enhancement to that code, and you’ll want it to be easy to deploy the change wherever it has been used. This is where an APEX plugin could be a suitable solution.

This post is mostly based on a talk given in October 2019 at the Australian Oracle User Group conference in Perth, Western Australia.

What is an APEX Plugin?

An APEX Plugin is a Shared Component that can be used in any number of pages in the application. When used, it generates a Region, an Item, a Process, or a Dynamic Action; or, it may implement an Authentication or Authorization Scheme. It is a self-contained set of attributes, code, and (optionally) file attachments. It may be copied to another application, or exported as a SQL script for importing into an application in another workspace or another APEX instance.

Where can I get started?

In my opinion, a great way to get started learning about plugins is to examine a lot of plugins that others have created. The two sources I would recommend you look at are:

  1. Install the Sample Database Application – go into its Shared Components -> Plugins, and look at each of the plugins therein. Look through each attribute and examine the code; read the help for each attribute as well. Look at how the plugins are used in the application.
  2. Browse the plugins at apex.world, and install a few that seem interesting to you.

When looking at these plugins, be aware that many of them have been created by people just like you and me – at differing levels of experience and skill; so they may or may not perfectly reflect “best practice”. This especially goes for the plugins published by yours truly :). In addition, depending on how long ago a plugin was written, it might be more complex than it needs to be, as it might not take advantage of recent enhancements to the APEX plugin API.

Plugin Concepts
Plugin Types

Plugin Type – a plugin may be for a page Region, a page Item, a page Process or a Dynamic Action; or it may implement an Authentication Scheme or an Authorization Scheme. For example, if you create or install an Item Plugin, it will appear in the list of item types available for adding to a page.

PL/SQL and Callbacks – if needed, you can embed PL/SQL code that APEX will call when rendering the plugin component or for other events such as handling an AJAX callback or validating an item.

The Callbacks attributes allow you to specify the name of the function for APEX to call; these might be functions that you define within the PL/SQL Code section, or they can call functions in a database package you have defined.

The functions you define must have the exact parameters and return types as documented in the help for each Callback attribute.

Standard & Custom Attributes – some plugin types have Standard Attributes (defined by APEX), and can have additional Custom Attributes (defined by you). For example, a Region plugin has standard attributes that define whether it supports a Data Source or supports the CSS Classes attribute.

If needed you can define up to 15 custom Application attributes. These would be set by the developer when they first install the plugin into their application, and their values apply to all usages of the plugin across their application.

You can also define up to 25 custom Component attributes, which are set on each usage of the plugin separately. The developer will find these in the Attributes node.

Each Custom attribute can be any of a long list of different types, including Text, TextArea, Yes/No, Number, Checkboxes, Select list, Item name, Icon, PL/SQL Code, JavaScript Code, and many others. You can provide a default value for each attribute if needed.

Files, JS and CSS – you can attach any number of resources including JavaScript, CSS scripts, images, or anything else that are required by your plugin. To load JavaScript or CSS files on pages where your plugin is used, you can simply enter them in the File URLs to Load, or call the APEX API routines in your render PL/SQL code (e.g. APEX_JAVASCRIPT.add_library or APEX_CSS.add_file).
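
For illustration, this is roughly what such a call could look like in a render function; a minimal sketch, where “myplugin” is a hypothetical attached file name (APEX_JAVASCRIPT.add_library itself is the API named above):

-- Inside the plugin's render PL/SQL code: load the attached myplugin.js.
-- p_plugin.file_prefix resolves to the plugin's file storage location,
-- and the API appends the .js extension for you.
apex_javascript.add_library (
    p_name      => 'myplugin',
    p_directory => p_plugin.file_prefix );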

When you export the plugin, all the attached resources will be included in the SQL script.

Plugin Events – if your plugin needs to allow your developers to add Dynamic Actions that respond to events raised by your plugin (e.g. to respond to user actions) you can define any number of custom Events here.

Walkthroughs

I’ve written these instructions to get you started creating plugins. They assume you are already reasonably familiar with building simple applications in APEX, and are comfortable editing PL/SQL.

Resources

Getting Your Hands Dirty with TensorFlow 2.0 and Keras API

Andrejus Baranovski - Thu, 2019-10-31 15:00
Diving into the technical details of regression model creation with TensorFlow 2.0 and the Keras API. In TensorFlow 2.0, Keras comes out of the box with the TensorFlow library, and the API is simplified and more convenient to use.


Read the complete article here.

AEM Forms – Certify PDF ends up with NoSuchMethodError on bouncycastle

Yann Neuhaus - Thu, 2019-10-31 13:30

As part of an AEM project, we were working on setting up a few actions on PDF files. One of these actions was to Sign & Certify a PDF file. The basic Sign & Certify action provided by AEM works easily by default, but if you look deeper, you might get a surprise. The complexity in this case came from the fact that we absolutely needed the signature to contain a valid Time-Stamp using the Time-Stamp Protocol (TSP) as well as a valid Long-Term Validation (LTV). In this blog, I will talk about one (of the numerous) issues we faced that I believe is related only to AEM on WebLogic.

As I mentioned above, the basic Certify operation works easily, but if you do not take a closer look, the result might not be TSP and/or LTV enabled. In our case, using AEM 6.4 SP3 on WebLogic Server 12.2.1.3, we got the Certify operation to work but without TSP & LTV:

Certify PDF - TSP failed & LTV failed

Looking at the AEM Managed Server logs, you can see that the last line is an error message:

####<Aug 28, 2019 12:15:22,278 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '16' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-129013562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000055> <1566994522278> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:25,025 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-129513562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525025> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:25,680 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525680> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 12:15:25,681 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525681> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property LastCacheResetTime has been changed from  to 1555070921173>
####<Aug 28, 2019 12:15:25,681 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525681> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property CacheValidationTime has been changed from 0 to 1555070921058>
####<Aug 28, 2019 12:15:25,684 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994525684> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,130 PM UTC> <Info> <Common> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-130E13562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526130> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000628> <Created "1" resources for pool "IDP_DS", out of which "1" are available and "0" are unavailable.>
####<Aug 28, 2019 12:15:26,141 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526141> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,147 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526147> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,153 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526153> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,158 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526158> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 12:15:26,571 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994526571> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 12:15:27,835 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '60' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-13A613562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000057> <1566994527835> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 12:15:30,923 PM UTC> <Error> <com.adobe.workflow.AWS> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '67' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-12C213562811050A7F40> <7503b440-54b5-43c7-be22-0f19c434ef4c-00000056> <1566994530923> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <An exception was thrown with name java.lang.NoSuchMethodError message:org.bouncycastle.asn1.x509.AlgorithmIdentifier.getObjectId()Lorg/bouncycastle/asn1/ASN1ObjectIdentifier; while invoking service SignatureService and operation certify and no fault routes were found to be configured.>


At the same time, we also got this kind of message:

ALC-DSC-003-000: com.adobe.idp.dsc.DSCInvocationException: Invocation error.
            at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:152)
            at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequired(EjbTransactionCMTAdapterBean.java:274)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
            at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequired(Unknown Source)
            at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:129)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
            at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
            at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
            at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
            at com.adobe.workflow.engine.PEUtil.invokeAction(PEUtil.java:893)
            at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.transientInvoke(WorkflowDSCInvoker.java:356)
            at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.invoke(WorkflowDSCInvoker.java:159)
            at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequiresNew(EjbTransactionCMTAdapterBean.java:299)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
            at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
            at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequiresNew(Unknown Source)
            at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:143)
            at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
            at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
            at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
            at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
            at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.invoke(AbstractMessageReceiver.java:329)
            at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invokeCall(SoapSdkEndpoint.java:153)
            at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invoke(SoapSdkEndpoint.java:91)
            at sun.reflect.GeneratedMethodAccessor621.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:397)
            at org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:186)
            at org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:323)
            at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
            at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
            at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
            at org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:454)
            at org.apache.axis.server.AxisServer.invoke(AxisServer.java:281)
            at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:699)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
            at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.adobe.idp.dsc.provider.impl.soap.axis.InvocationFilter.doFilter(InvocationFilter.java:43)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
            at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
            at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328)
            at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
            at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
            at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
            at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
            at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
            at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
            at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
            at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
            at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
            at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
            at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
Caused by: java.lang.NoSuchMethodError: org.bouncycastle.asn1.x509.AlgorithmIdentifier.getObjectId()Lorg/bouncycastle/asn1/ASN1ObjectIdentifier;
            at com.adobe.livecycle.signatures.pki.timestamp.TimestampInfoBC.matchesMessageImprint(TimestampInfoBC.java:187)
            at com.adobe.livecycle.signatures.pki.timestamp.TimestampToken.validateRequest(TimestampToken.java:430)
            at com.adobe.livecycle.signatures.pki.impl.PKIOperations.createTimestamp(PKIOperations.java:562)
            at com.adobe.livecycle.signatures.service.impl.TimeStampProviderImpl.getTimestampToken(TimeStampProviderImpl.java:85)
            at com.adobe.idp.cryptoprovider.LCPKCS7Signer$1.getActualAttributes(LCPKCS7Signer.java:256)
            at com.adobe.livecycle.signatures.pki.signature.CMSPKCS7Impl.sign(CMSPKCS7Impl.java:702)
            at com.adobe.livecycle.signatures.pki.impl.PKIOperations.sign(PKIOperations.java:345)
            at com.adobe.livecycle.signatures.service.cryptoprovider.DSSPKCS7Signer.signData(DSSPKCS7Signer.java:84)
            at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:123)
            at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
            at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:820)
            at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.certifyWrapperAPI(SignatureManager.java:1554)
            at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.certify(SignatureManager.java:1542)
            at com.adobe.livecycle.signatures.service.impl.SignCertifyImpl.certify(SignCertifyImpl.java:894)
            at com.adobe.livecycle.signatures.service.impl.DocumentSecurityService.certify(DocumentSecurityService.java:1644)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
            ... 102 more

 

Based on the above messages, it is clear that there is a problem with some of the bouncycastle classes. This kind of thing is usually either a missing class (“ClassNotFoundException”) or a conflict between two or more versions loaded by WebLogic (“NoSuchMethodError”), where the loaded/active version doesn't contain the specific Java method being called. We opened an SR with Adobe Support (#188938) because this kind of thing shouldn't be happening, but after a few days without any meaningful update from them, I decided to look into the product myself to stop losing time on such a trivial thing.

So this specific class (“org.bouncycastle.asn1.x509.AlgorithmIdentifier”) can be found in numerous jar files: apacheds*.jar, bcprov*.jar, bouncycastle*.jar, ec2*.jar, aso… I checked all these jar files in our WebLogic Server libraries as well as the AEM ones and found what I believe was the issue: different versions of these jars being loaded. To confirm, and before changing anything, I deployed the WebLogic CAT (Classloader Analysis Tool) and found:

  • 0 conflicts in adobe-livecycle-cq-author.ear
  • 0 conflicts in adobe-livecycle-native-weblogic-x86_linux.ear
  • 5339 conflicts in adobe-livecycle-weblogic.ear

 
These numbers pretty much confirmed what I already thought. Going further, I found a few hundred conflicts related to the “org.bouncycastle.*” classes alone. One of these was for the class “org.bouncycastle.asn1.x509.AlgorithmIdentifier”, which was conflicting between the following files:

  • WebLogic: $MW_HOME/oracle_common/modules/org.bouncycastle.bcprov-jdk15on.jar (1st loaded)
  • WebLogic: $MW_HOME/oracle_common/modules/org.bouncycastle.bcprov-ext-jdk15on.jar
  • AEM: $APPLICATIONS/adobe-livecycle-weblogic.ear/bcprov-151.jar

 
So what should be done to fix this? Well, a simple solution is to force WebLogic to use the AEM-provided files by default by updating the classloading preferences:

[weblogic@aem-node-1 ~]$ cd $APPLICATIONS
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ jar -xvf adobe-livecycle-weblogic.ear META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ grep -B1 "</prefer-application-packages>" META-INF/weblogic-application.xml
<package-name>org.mozilla.javascript.xmlimpl.*</package-name>
</prefer-application-packages>
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ sed -i 's,</prefer-application-packages>,<package-name>org.bouncycastle.*</package-name>\n&,' META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ grep -B2 "</prefer-application-packages>" META-INF/weblogic-application.xml
<package-name>org.mozilla.javascript.xmlimpl.*</package-name>
<package-name>org.bouncycastle.*</package-name>
</prefer-application-packages>
[weblogic@aem-node-1 AEM]$ 
[weblogic@aem-node-1 AEM]$ jar -uvf adobe-livecycle-weblogic.ear META-INF/weblogic-application.xml
[weblogic@aem-node-1 AEM]$ rm -rf META-INF
[weblogic@aem-node-1 AEM]$

 

The above commands simply add “<package-name>org.bouncycastle.*</package-name>” just before the end of the “<prefer-application-packages>” section so that WebLogic knows it needs to use the AEM-provided classes for this package and shouldn't use its own files. Once that is done, simply redeploy the EAR file. In my case, I was left with “only” 2442 conflicts, none regarding bouncycastle (obviously).

After that, executing the same Certify action with the new classloader preferences resulted in no more errors:

####<Aug 28, 2019 1:12:22,359 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '109' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-1475E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000071> <1566997942359> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 1:12:23,702 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-147BE729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997943702> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>
####<Aug 28, 2019 1:12:24,199 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944199> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>
####<Aug 28, 2019 1:12:24,199 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944199> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property LastCacheResetTime has been changed from  to 1555070921173>
####<Aug 28, 2019 1:12:24,200 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944200> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC001: The property CacheValidationTime has been changed from 0 to 1555070921058>
####<Aug 28, 2019 1:12:24,202 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944202> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,691 PM UTC> <Info> <Common> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14F2E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944691> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000628> <Created "1" resources for pool "IDP_DS", out of which "1" are available and "0" are unavailable.>
####<Aug 28, 2019 1:12:24,704 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944704> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,710 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944710> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,717 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944717> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,724 PM UTC> <Info> <com.adobe.formServer.common.cachemanager.CacheConfig> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944724> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Initializing cache from default values >
####<Aug 28, 2019 1:12:24,928 PM UTC> <Info> <com.adobe.formServer.config.FormServerConfigImpl> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '116' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-14A8E729A745050A7F40> <3a34648b-38e4-4ec5-8a0a-e6872bc1c6a1-00000072> <1566997944928> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <FSC008: Using the database to access and persist configuration properties.>

 

The generated PDF now contained correct Time-Stamp information but still no LTV information:

Certify PDF - TSP working & LTV failed

Finally, adding a Validation step after the Certify step in the process (in the AEM Application (LCA)) allowed both TSP and LTV information to be shown properly:

Certify PDF - TSP working & LTV working

The article AEM Forms – Certify PDF end-up with NoSuchMethodError on bouncycastle appeared first on Blog dbi services.

AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL

Yann Neuhaus - Thu, 2019-10-31 13:00

In a previous blog, I described the process to set up AEM Forms to allow the AEM Workbench to connect to AEM using “2-way-SSL”. This setup is normally independent of the Application Server that you are using to host AEM. However, I already faced an issue (other than this one) caused by the 2-way-SSL setup for the Workbench when a WebLogic Cluster is used to host AEM.

As mentioned in the previous blog, I'm not an AEM expert but I know a few things about WebLogic, so the idea here was to set up a fully functional WebLogic Cluster composed of two Managed Servers on two hosts/machines, test it properly and then install the AEM Forms application on top of it. Obviously, AEM Forms was configured behind a Load Balancer for this purpose. At this point, AEM Forms was working perfectly in HA and stopping one of the nodes wasn't a problem.

Then I configured the Workbench for 2-way-SSL and I did so while being connected to the AEM Node1 in Workbench, creating the Hybrid Domain in the AEM AdminUI Node1, aso… At the end, the AEM Workbench was working properly with the 2-way-SSL setup as well, so it looked like the setup was complete. Just to be sure, I stopped the AEM Node1 and tried to log in to the AEM Workbench with the exact same parameters (same keystore, same truststore, same passwords) except for the target Server, which I switched to the AEM Node2. Doing so, the login failed and I could see the following message in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:14:46,277 PM UTC> <Info> <EJB> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <81fe4dac-31f0-4c25-bf37-17d5b327a901-0000005e> <1549980886277> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-010227> <EJB exception occurred during invocation from home or business: com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapter_fw85em_Intf generated exception: ALC-DSC-124-000: com.adobe.idp.dsc.DSCAuthorizationException: User does not have the Service Read Permission.>

 
Just looking at this message, it's clear that the user account that works properly for the AEM Node1 isn't working for the AEM Node2. After some investigation, it looked like the Hybrid Domain wasn't shown on the AEM AdminUI Node2, for some reason… Both nodes are using the same Oracle Database and the same GDS (Global Document Storage) path, so I thought that the issue might be related to a cache somewhere in AEM. Therefore, I thought about re-creating the Hybrid Domain but cancelled this move right away because I assumed it could have brought me more trouble than solutions (I didn't want to create 2 objects with the same name, risk corruption or whatever…):

  • Open the AEM AdminUI Node2 (Node1 is still down) (https://<AEM_HOST_2>:<AEM_PORT>/adminui)
  • Log in with an administrative account (e.g. administrator)
  • Navigate to: Settings > User Management > Domain Management
    • -> Only 1 domain is displayed, the default one: DefaultDom
  • Click on “New Hybrid Domain”
    • Click on “Cancel”

 
After doing that, the Hybrid Domain (the one created in this blog, named “SSLMutualAuthProvider”) magically appeared, so I assume it forced a synchronization and an update of the cache on the AEM Node2. Trying to log in to the AEM Workbench again, without changing the parameters, printed the following in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:30:43,208 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '117' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-24A18C6CA9D79C032EFA> <81fe4dac-31f0-4c25-bf37-17d5b327a901-00000067> <1549981843208> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>

 
The above message means that the login was successful and Workbench was able to load the data from AEM properly. I guess there are other ways to fix this issue. There is a “Sync Now” as well as a “Refresh” button on the Domain Management page of the AdminUI, so maybe these would have done the same thing and forced a synchronization… Since opening and cancelling the “New Hybrid Domain” dialog already solved my issue, I couldn't test further, unfortunately. A restart of the AEM Node2 is also sufficient to force a refresh, but this takes a few minutes and requires downtime, so it's not ideal.

The article AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL appeared first on Blog dbi services.

IOT Hash

Jonathan Lewis - Thu, 2019-10-31 09:59

It’s another of my double-entendre titles. The optimizer can turn a hash join involving an index-organized table into a real performance disaster (though you may have to help it along the way by using a silly definition for your primary key columns). This post was inspired by a question posted on the Oracle Developer Community forum recently so the table and column names I’ve used in my model reflect (almost, with a few corrections) the names used in the post.

We start with a simple requirement expressed through the following SQL:


rem
rem     Script:         iot_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

insert
        /*+
                qb_name(insert)
        */
into t_iot(
        id, inst_id, nr_time,
        o_time, st, null_col, svname
)
select
        /*+
                qb_name(main)
                unnest(@subq)
                leading(@sel$a93afaed apar@main ob@subq)
                use_hash(@sel$a93afaed ob@subq)
                swap_join_inputs(@sel$a93afaed ob@subq)
                index_ss_asc(@sel$a93afaed ob@subq (t_iot.st t_iot.inst_id t_iot.svname))
        */
        apar.id,
        'UP',
        to_date('2019-10-24','yyyy-mm-dd'),
        to_date('1969-12-31','yyyy-mm-dd'),
        'IDLE',
        null,
        'tkt007.jj.bb.com'
from
        t_base apar
where
        apar.id not in (
                select
                        /*+
                                qb_name(subq)
                        */
                        id
                from
                        t_iot ob
                where
                        inst_id = 'UP'
        )
and     nvl(apar.gp_nm,'UA') = 'UA'
and     rownum <= 5000
/

The requirement is simple – insert into table t_iot a set of values dictated by a subset of the rows in table t_base if they do not already exist in t_iot. To model the issue that appeared I’ve had to hint the SQL to get the following plan (which I pulled from memory after enabling rowsource execution stats):


---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |             |      1 |        |   296 (100)|      0 |00:00:00.03 |     788 |    148 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL | T_IOT       |      1 |        |            |      0 |00:00:00.03 |     788 |    148 |       |       |          |
|*  2 |   COUNT STOPKEY          |             |      1 |        |            |    100 |00:00:00.03 |      99 |     91 |       |       |          |
|*  3 |    HASH JOIN RIGHT ANTI  |             |      1 |    100 |   296   (2)|    100 |00:00:00.03 |      99 |     91 |    14M|  1843K|   15M (0)|
|*  4 |     INDEX SKIP SCAN      | T_IOT_STATE |      1 |  12614 |   102   (0)|  10000 |00:00:00.01 |      92 |     91 |       |       |          |
|*  5 |     TABLE ACCESS FULL    | T_BASE      |      1 |    100 |     2   (0)|    100 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<=5000)
   3 - access("APAR"."ID"="ID")
   4 - access("INST_ID"='UP')
       filter("INST_ID"='UP')
   5 - filter(NVL("APAR"."GP_NM",'UA')='UA')

The optimizer has unnested (as hinted) the subquery and converted it to an anti-join using a right hash anti-join. Take a look at the Used-mem for the hash join – would it surprise you to learn that the total size of the (not compressed in any way) IOT, and all its indexes, and the t_base table together total less than 4 MB. Something dramatically awful has happened in the hash join to generated a requirement of 14MB. (In the case of the OP this appeared as an unexpected 5GB written to the temporary tablespace.)

Before I address the source of the high memory usage, take a close look at the Predicate Information, particularly operation 3, and ask yourself what the definition of index t_iot_state might be. The predicate joins t_base.id to t_iot.id, and here’s the code to create both tables and all the indexes.

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(200),
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/

insert into t_iot
select
        sysdate,
        dbms_random.string('l',10),
        'UP',
        sysdate,
        'IDLE',
        null,
        rpad('x',25,'x')
from
        all_objects
where
        rownum <= 1e4 -- > hint to avoid wordpress format issue
/

create index t_iot_state on t_iot(st, inst_id, svname); 
create index idx2        on t_iot(id, inst_id, svname);

create table t_base(
        id              varchar2(400) not null,
        gp_nm           varchar2(200)
)
/

insert into t_base
select
        dbms_random.string('l',10),
        'UA'
from
        all_objects
where
        rownum <= 100 -- > hint to avoid wordpress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_iot',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_base',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/


The index t_iot_state that Oracle has used in the hash join is defined on the columns (st, inst_id, svname) – so the predicate is doing a comparison with a column that’s not in the index! At least, it’s not visibly declared in the index; but this is a secondary index on an IOT, and IOTs don’t have “normal” rowids: the rowid in a secondary index is the value of the primary key (plus a “block guess”). So the columns actually present in the index (even though not declared in it) are: (st, inst_id, svname, {nr_time, id, inst_id, blockguess}) – which means this index does supply the required id column.

Side note: you’ll see in the list of columns above that inst_id appears twice. In fact (since Oracle 9, I think) the code to handle secondary indexes has been smart enough to avoid this duplication. If the secondary index contains columns from the primary key then the “rowid” doesn’t store those columns; the code knows how to construct the primary key value from the stored PK columns combined with the needed columns from the index entry. This can make IOTs a very nice choice of implementation for “intersection” tables that are used to represent many-to-many joins between two other tables.
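
To make that side note concrete, here's a minimal illustrative sketch – the table and column names are invented for the example and are not part of the original model. The link rows live in the primary key index itself, and the secondary index's stored “rowid” is the primary key, so item_id is not duplicated in it:

rem     Illustrative sketch only -- hypothetical names, not part of the test case.

create table order_item_link (
        order_id        number  not null,
        item_id         number  not null,
        qty             number,
        constraint oil_pk primary key (order_id, item_id)
)
organization index
/

rem     Secondary index to drive the join from the "items" end; its stored
rem     "rowid" is the primary key value (plus block guess).

create index oil_item_idx on order_item_link(item_id)
/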

Unfortunately this “rowid” is the explanation for the massive memory demand. Take a look at the “Column Projection Information” for my execution plan:


Column Projection Information (identified by operation id):
-----------------------------------------------------------
   2 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200], ROWNUM[8]
   3 - (#keys=1) "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]
   4 - "OB".ROWID[ROWID,1249], "NR_TIME"[TIMESTAMP,11], "ID"[VARCHAR2,1024], "INST_ID"[VARCHAR2,200], "OB".ROWID[ROWID,1249]
   5 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]

The interesting line is operation 4. A hash join takes the rowsource from its first child (the build table) and creates an in-memory hash table (which may spill to disc, of course), so if I see an unreasonable memory allocation (or unexpected spill to disc) a good starting point is to look at what the first child is supplying. In this case the first child seems to be saying that it’s supplying (or allowing for) nearly 3,700 bytes to be passed up to the hash join.

On closer inspection we can see it’s reporting the “rowid” twice, and also reporting the three columns that make up that rowid. I think it’s reasonable to assume that it’s only supplying the rowid once, and maybe it’s not even supplying the other three columns because they are embedded in the rowid. Doing a quick arithmetic check, let’s multiply the size of the rowid by the value of A-rows: 1,249 * 10,000 = 12,490,000. That’s pretty close to the 14MB reported by the hash join in operation 3.
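
To make the units explicit, here's the same quick check expressed as a query – nothing new, just the arithmetic above restated:

rem     Roughly 11.9 MB of padded "rowids" before any hash-table overhead,
rem     consistent with the 14MB reported at operation 3.

select
        round(1249 * 10000 / 1024 / 1024, 1)    build_input_mb
from
        dual
/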

Hypothesis – to get at the id column, Oracle has used this index (actually a very bad choice of those available) to extract the rowid and then passed the rowid up to the parent in a (length-padded) fixed format. Oracle has then created a hash table by extracting the id column from the rowid and building the hash table on it, but also carrying the length-padded rowid into the hash table. Possible variants on this theme are that some or all of the other columns in the Column Projection Information are also passed upwards so that the id doesn’t have to be extracted, but if they are, they are not padded to their maximum length.

A simple test that this is broadly the right assumption is to re-run the model with a much larger declared length for one of the primary key columns and see what happens to the memory allocation (a sketch of the modified declaration follows below). Changing the inst_id declaration from 200 bytes to 1000 bytes (note that the stored value is still only the 2 bytes needed for the value ‘UP’), the Used-mem jumps to 23 MB – an increment very similar to 800 * 10,000. You’ll note that I chose to experiment with a column that wasn’t the column used in the join; it was a column in the secondary index definition, though, so another test was to change the nr_time column from a timestamp (11 bytes) to a large varchar2: re-running the test declaring nr_time as a varchar2(1000) – and reverting the inst_id to varchar2(200) – the Used-mem increased to 25MB.
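
For anyone who wants to repeat the experiment, a minimal sketch of the first variation – assuming everything else in the original script stays the same, only the inst_id declaration changes while the stored value is still just the two bytes for ‘UP’:

rem     Re-test sketch: same model, but inflate the declared length of one
rem     primary key column.  Drop the old objects first, then re-run the
rem     rest of the original script unchanged.

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(1000),         -- was varchar2(200)
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/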

Preliminary Conclusion

If Oracle uses the contents of the rowid of a secondary index on an IOT in a join then it constructs a fixed format version for the rowid by padding every column in the primary key to its maximum length and concatenating the results. This can have catastrophic side effects on performance if you’ve declared some very long columns “just in case”. Any time you use index organized tables you should remember to check the Column Projection Information in any execution plans that use secondary indexes in case they are passing a large, padded, primary key through the plan to a point where a blocking operation (such as a hash join or merge join) has to accumulate a large number of rows.
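
If you're wondering where the Column Projection Information comes from: it's reported by dbms_xplan when you add the +projection modifier to the format string. A minimal sketch for the simplest case, the most recently executed statement in the current session:

rem     Pull the plan, rowsource stats, and column projection for the most
rem     recent statement in this session.  A-Rows etc. need rowsource
rem     statistics (statistics_level = all, or the gather_plan_statistics hint).

select  *
from    table(dbms_xplan.display_cursor(null, null, 'allstats last +projection'))
/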

Footnote

In my test case I had to hint the query heavily to force Oracle into the path I wanted to demonstrate.

It’s surprising that the optimizer should have chosen this path in the OP’s system, given that there’s another secondary index that contains the necessary columns in its definition. (So one thought is that there’s a statistics problem to address, or possibly the “good” index is subject to updates that make it become very inefficient (large) very quickly.)

Another oddity of the OP’s system was that Oracle chose to do a right hash anti-join when it looked as if joining the tables in the opposite order would produce a much smaller memory demand and lower cost – there was an explicit swap_join_inputs() hint in the Outline Information (so copying the outline into the query and changing that to no_swap_join_inputs() might have been another viable workaround). In the end the OP hinted the query to use a nested loop (anti-)join from t_base to t_iot – which is another way to avoid the hash table threat with padded rowids.

 

How to Record your Own Music: a Beginner’s Manual

VitalSoftTech - Thu, 2019-10-31 09:56

If you’re an aspiring musician or independent artist, you might be wondering how to record your own music at home through a DIY music production process. By doing this, you could potentially save a lot of money, time, and effort that goes into producing a song. In the 21st century, there’s an app available for […]

The post How to Record your Own Music: a Beginner’s Manual appeared first on VitalSoftTech.

Categories: DBA Blogs

Database Design Question

Tom Kyte - Thu, 2019-10-31 09:47
Hello, Ask Tom Team. I have a table A with columns (client_id,invoice_number,invoice_type) with a composite primary key (client_id,invoice_number). At business level there are 5 invoice_types (70,71,72,73,74). The invoice_number always brings the ...
Categories: DBA Blogs

finding out the source of the data in a table

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I am working on a database application, and I need to know how a table is being populated in the database schema. I have tried querying xxx_dependencies and xxx_source but to no avail. I believe that this table might be populated from an...
Categories: DBA Blogs

How to change the plan of a query in execution??

Tom Kyte - Thu, 2019-10-31 09:47
So I often face an issue where a query generates an execution plan assuming the wrong cardinality, and since it assumes it as 1 it goes into a M3RG3 CART3SIAN join. Now if I gather the stats the query does not automatically pick up the new plan. Is there a ...
Categories: DBA Blogs

List of event codes for traces

Tom Kyte - Thu, 2019-10-31 09:47
Hello Masters, Can you give me a link on docs.oracle.com that lists all the system event codes for traces, like 10046, 10053... and, most importantly, their meaning? I know the 10046, 10053 codes but I am sure there are many ...
Categories: DBA Blogs

Performance of a VIEW on multiple tables (historical plus current)

Tom Kyte - Thu, 2019-10-31 09:47
Good afternoon, I have a customer that manages millions of rows across a lot of tables. The thing is that we have a big table, from 2013 until now, which we would like to divide in two: one for the last three months and one for the rest (which we call...
Categories: DBA Blogs

Restrict Application access to developers in same workspace

Tom Kyte - Thu, 2019-10-31 09:47
How to show a user only a certain number of applications in App Builder (i.e. the user should not be able to see all applications in App Builder)? In an Oracle Apex workspace, I need to create a new user (admin role) such that the new user can only see sele...
Categories: DBA Blogs

Find if event spanning several dates happened via SQL

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have data like below <code> event_flag event_date 1 date1 1 date2 0 date3 1 date4 0 date5 0 date6 1 date7 1 date8 1 date9 ...
Categories: DBA Blogs

pragma autonomous_transaction within procedure before creating synonyms

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have created a stored procedure within an Oracle package which creates a list of synonyms depending upon a change of dblink server name. I need to execute this procedure to create/replace synonyms pointing to another dblink server. My quest...
Categories: DBA Blogs

Undo Tablespaces.

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, Waiting to ask you this question. What is an Undo Tablespace in 9i? Is it similar to Rollback Segments? What are non-standard block sizes, and why non-standard? Why am I not able to create a RS on a Locally Managed Automatically Si...
Categories: DBA Blogs

How to count no of records in table without count?

Tom Kyte - Thu, 2019-10-31 09:47
2) How to count the number of records in a table without count? --- Actually, this question was asked when I attended an interview at Dell. I don't know why people ask these types of questions, but I answered in my own way. --->...
Categories: DBA Blogs
