Speeding up initial data load from Oracle to PostgreSQL using GoldenGate and the COPY command

Oracle GoldenGate supports Oracle to PostgreSQL migrations by supporting PostgreSQL as a target database, though the reverse migration, i.e. PostgreSQL to Oracle, is not supported. One of the key aspects of these database migrations is the initial data load phase, where full table data has to be copied to the target datastore. This can be a time-consuming activity, with the load time varying based on table sizes. Oracle suggests using multiple GoldenGate processes to improve load performance, or using native database utilities to perform faster bulk loads.

To use a database bulk-load utility, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in external ASCII format. The file can be read by Oracle’s SQL*Loader, Microsoft’s BCP, DTS, or SQL Server Integration Services (SSIS) utility, or IBM’s Load Utility (LOADUTIL).

GoldenGate for PostgreSQL doesn’t provide native bulk-loader support like BCP for SQL Server or SQL*Loader for Oracle. As an alternative, we can use the FORMATASCII option to write data into delimited files (CSV or any custom delimiter) and then load them using the PostgreSQL COPY command. This approach is not automated, and you will have to ensure that all files are loaded into the target database.

In this post, we will evaluate two approaches, i.e. using multiple Replicat processes and using ASCII dump files with the PostgreSQL COPY command, and compare their performance. The diagram below shows both approaches.

Ref: https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_initsync.htm#GWUAD561

To compare the scenarios, I created a test table with 200M rows (12 GB) and used an RDS PostgreSQL instance (db.r3.4xlarge with 10k PIOPS).

CREATE TABLE scott.Big_table (
id NUMBER,
small_number NUMBER(5),
big_number NUMBER,
short_string VARCHAR2(50),
created_date DATE,
CONSTRAINT big_table_pkey PRIMARY KEY (id)
) tablespace LRG_TBSP;

--create sequence for PK

create sequence scott.big_table_seq start with 1 increment by 1 cache 500 nomaxvalue;
-- Load data
INSERT /*+ APPEND */ INTO scott.Big_table
SELECT scott.big_table_seq.nextval AS id,
TRUNC(DBMS_RANDOM.value(1,5)) AS small_number,
TRUNC(DBMS_RANDOM.value(100,10000)) AS big_number,
DBMS_RANDOM.string('L',TRUNC(DBMS_RANDOM.value(10,50))) AS short_string,
TRUNC(SYSDATE + DBMS_RANDOM.value(0,366)) AS created_date
FROM dual
CONNECT BY level <= 10000;
COMMIT;

-- Double the row count; run this insert repeatedly until the table reaches ~200M rows
INSERT /*+ APPEND */ INTO scott.Big_table
SELECT scott.big_table_seq.nextval AS id,
small_number,
big_number,
short_string,
created_date
FROM scott.Big_table;
COMMIT;

--PostgreSQL Table

CREATE TABLE scott.big_table (
id bigint,
small_number int,
big_number bigint,
short_string VARCHAR(50),
created_date TIMESTAMP,
CONSTRAINT big_table_pk PRIMARY KEY (id)
);

Approach 1: Using multiple Oracle GoldenGate Replicat processes to load data

In this approach, I used eight Oracle GoldenGate Replicat processes with the @RANGE filter to load data into PostgreSQL.

We were able to get ~5k inserts/sec per process and loaded the table in ~88 minutes with 8 Replicat processes.
One key point to remember: if you are working with EC2 and RDS, the EC2 machine hosting the trail files and the RDS instance should be in the same Availability Zone. During testing, we noticed that the insert rate dropped drastically (~800 inserts/sec) with cross-AZ writes. Below is the Replicat parameter file used for the data load.

SpecialRUN
END Runtime
SETENV (NLSLANG=AL32UTF8)
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV ( PGCLIENTENCODING = "UTF8" )
SETENV (ODBCINI="/opt/app/oracle/product/ggate/12.2.0.1/odbc.ini" )
TARGETDB GG_Postgres, USERIDALIAS pguser
Extfile /fs-a01-a/databases/ggate/initload/i5
HANDLECOLLISIONS
DISCARDFILE /opt/app/oracle/product/ggate/12.2.0.1/direrr/rinit1.dsc, APPEND, megabytes 20
reportcount every 60 seconds, rate
BATCHSQL;
MAP scott.big_table, TARGET scott.big_table, FILTER (@RANGE (1,8));;

You will need to create additional Replicat parameter files, changing only the range clause in each, e.g. FILTER (@RANGE (2,8)), FILTER (@RANGE (3,8)), etc., as shown below.
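For example, the MAP statement in the second Replicat’s parameter file would be (the rest of the file stays the same, pointing at the same extract file):

MAP scott.big_table, TARGET scott.big_table, FILTER (@RANGE (2,8));;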

Approach 2: Data load using the PostgreSQL COPY command

In the second approach, we used a parameter file with the FORMATASCII option (refer to the snippet below) to create a GoldenGate Extract process that dumped the data with a ‘|’ delimiter, and then used the PostgreSQL COPY command to load data from these dump files.

Extract Parameter file

SOURCEISTABLE
SETENV (ORACLE_SID=ggpoc)
SETENV (NLSLANG=AL32UTF8)
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME=/opt/app/oracle/product/11.2.0.4/A10db)
SETENV (TNS_ADMIN=/opt/app/oracle/local/network/)
USERIDALIAS gguser
RMTHOST xx.222.xx.78, MGRPORT 8200, TCPBUFSIZE 100000, TCPFLUSHBYTES 300000
RMTHOSTOPTIONS ENCRYPT AES256
FORMATASCII,NOHDRFIELDS,NOQUOTE,NONAMES, DELIMITER '|'
RMTFILE /fs-a01-a/databases/ggate/initload/i4 , megabytes 1000
reportcount every 60 seconds, rate
TABLE scott.BIG_TABLE;

With the above parameter file, the GoldenGate Extract process sends data to the remote system and stores it in dump files. These files are then loaded into PostgreSQL using the \copy command.

psql> \copy scott.big_table from '/fs-a01-a/databases/ggate/initload/i4000000' with DELIMITER '|';
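Since the Extract rolls over to a new sequentially numbered dump file once the RMTFILE size (1000 MB here) is reached, every file has to be loaded. A minimal sketch that loops over all dump files (the psql connection options are assumptions; the files can also be split across several sessions to load in parallel):

for f in /fs-a01-a/databases/ggate/initload/i4*; do
  psql -h <pg_endpoint> -U <user> -d <dbname> -c "\copy scott.big_table from '$f' with DELIMITER '|'"
done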

The data load took ~21 minutes, which is nearly 4x faster than the initial approach. If you drop the primary key index before the load, the time drops to ~9 minutes for the 200M-row POC table.
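If you do drop the primary key for the load, remember to add it back once the load completes; a sketch of that sequence, using the table and constraint names from the DDL above:

ALTER TABLE scott.big_table DROP CONSTRAINT big_table_pk;
-- run the \copy loads here
ALTER TABLE scott.big_table ADD CONSTRAINT big_table_pk PRIMARY KEY (id);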

 

Update:

Oracle GoldenGate 19.1 comes with the DataDirect 7.1 PostgreSQL Wire Protocol ODBC driver for PostgreSQL connectivity. You can now add the parameter “BatchMechanism=2” to speed up the inserts. After setting this parameter, the ODBC driver batches inserts into a memory buffer and inserts them together instead of performing single-row inserts. You can find details here.

To add the parameter, update odbc.ini and add BatchMechanism under the database section.

 

[apgtarget]
Driver=/oracle/product/ggate/19.1.0.0/lib/GGpsql25.so
Description=DataDirect 7.1 PostgreSQL Wire Protocol
BatchMechanism=2
Database=pgbench
HostName=xxx.xxxxxx.us-east-1.rds.amazonaws.com
PortNumber=5432


In my testing, the insert rate increased from 5.3K rows per second to 35K, i.e. nearly a 7x increase. I also noticed that WriteIOPS on this Aurora instance increased from 20K to 80-100K.

Saving AWS RDS and Redshift Billing Costs using Reserved Instances

 

In this post, we will understand what Reserved Instances are, when you need them, and various ways to monitor them in your environment.

As more enterprises embrace AWS cloud technologies, we need to understand AWS’s breadth of services and pricing options to right-size our compute resources and run efficient infrastructure. Cost optimization is one of the 5 pillars of the AWS Well-Architected Framework, a framework developed by AWS to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. If you would like to read more about the Cost Optimization pillar, you can refer to this whitepaper.

Introduction

A Reserved Instance (RI) is a billing concept which provides pricing discounts for running instances in your AWS accounts to help lower costs. In the case of EC2, RIs also provide capacity reservation, i.e. AWS will ensure that you always have access to EC2 capacity when you need it, for as long as you need it. The discounted usage price is fixed for the term of the RI, allowing you to predict costs over the term of the reservation. If you are expecting consistent heavy use (e.g., database services), RIs can provide significant cost savings compared to On-Demand instances. An important point to note is that an RI is not a physical instance, i.e. to achieve savings, you don’t have to modify your running instance.

When you purchase an RI, it is automatically applied to running RDS/Redshift nodes with the same instance parameters (Region, Engine/Node Type, RDS DB Instance Class, and Instance Count). RIs do not renew automatically; you can continue to use the RDS/Redshift instances/nodes without interruption, but you will be charged On-Demand rates if you do not have a sufficient number of RIs in your AWS account. RIs can be visualised as discount coupons that are applied to your final monthly bill.

The table below lists the parameters that you set when purchasing an RI. If there is a mismatch in any of these parameters, the RI discount will not be applied.

Database Type | RI Region | RI Type     | RI Class          | RI Count
Amazon RDS    | Region    | Engine Type | DB Instance Class | Instance Count
Redshift      | Region    | N/A         | Node Type         | Node Count

For example, let’s consider that you have been running 2 PostgreSQL RDS (db.r3.4xlarge) instances and 2 MySQL RDS (db.r3.4xlarge) instances in us-east-1 since December 2017. On January 1, 2018, you purchased RIs for 3 PostgreSQL db.r3.4xlarge instances in us-east-1. At the end of January 2018, your monthly bill would include the RDS RI discount on 2 PostgreSQL RDS instances, since you only have 2 running PostgreSQL RDS instances in the us-east-1 region. As a result, your January 2018 monthly bill would include 2 MySQL RDS instances at the On-Demand rate and 1 unused RI available for the RDS PostgreSQL database engine (see the table below).

Region    | Engine Type | DB Instance Class | Running Instances | RI Available | RI Applied
us-east-1 | PostgreSQL  | db.r3.4xlarge     | 2                 | 3            | 2
us-east-1 | MySQL       | db.r3.4xlarge     | 2                 | 0            | 0

In order to optimize your monthly bill for all the RDS instances, you would need to purchase additional RDS RIs. To do so, you would purchase additional RIs in us-east-1 with the MySQL engine type and the matching instance class and count.

RDS Instance Size Flexibility

Amazon RDS Reserved Instances provide size flexibility for the MySQL, MariaDB, PostgreSQL, and Amazon Aurora database engines. With size flexibility, your RI’s discounted rate will automatically apply to usage of any size in the instance family (using the same database engine).

e.g. if you are running a db.r4.8xlarge with a corresponding RI and have now decided to upgrade to a db.r4.16xlarge, you can purchase a reservation for one more db.r4.8xlarge to avoid on-demand charges. The diagram below depicts that with one db.r4.16xlarge reservation, you can run multiple combinations: (2 db.r4.8xlarge), (4 db.r4.4xlarge), or (1 db.r4.8xlarge and 2 db.r4.4xlarge).

 

AWS uses Normalized units to arrive at these calculations. The following table shows the number of normalized units for each DB instance size.

Instance Size | Single-AZ Normalized Units | Multi-AZ Normalized Units
micro         | 0.5                        | 1
small         | 1                          | 2
medium        | 2                          | 4
large         | 4                          | 8
xlarge        | 8                          | 16
2xlarge       | 16                         | 32
4xlarge       | 32                         | 64
8xlarge       | 64                         | 128
10xlarge      | 80                         | 160
16xlarge      | 128                        | 256
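For example, a Single-AZ db.r4.16xlarge is 128 normalized units, so one db.r4.16xlarge reservation also covers 2 db.r4.8xlarge (2 x 64 = 128), 4 db.r4.4xlarge (4 x 32 = 128), or 1 db.r4.8xlarge plus 2 db.r4.4xlarge (64 + 2 x 32 = 128).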

When to Purchase Reserved Instances?

In order to utilise reservations optimally, buy RIs if you plan to use the resource for a long duration. AWS provides 1-year and 3-year offerings along with the option to pay No Upfront, Partial Upfront, or All Upfront, and hourly rates vary based on the option chosen. You can refer to Redshift Pricing and RDS Pricing for the rates.

Note: Redshift and RDS RIs do not come with a capacity guarantee, as they are only a billing discount.

Considering that these are fixed commitments, you will be charged regardless of usage. Therefore, if the resource is going to be short-lived, or you are not sure it’s the right instance type, it is recommended to use On-Demand instances. Also, if you need to upgrade capacity for a short duration, say 10 days in a calendar year, it would make sense to pay on-demand charges for that extra capacity rather than purchasing a reservation. You can make use of the AWS Simple Monthly Calculator to arrive at the total cost and make an informed decision.

Monitoring

You should monitor your reservations, as a mismatch could impact your monthly bill. Mismatches can arise from the following scenarios:

  • Scaling: When scaling activities occur, like adding a new RDS read replica or adding nodes to a Redshift cluster, new reservations need to be bought; otherwise there can be a mismatch between total running and total reserved instances.
  • Expired Reservations: Reservations have a fixed term of 1 or 3 years and expire at the end of the term. They are not auto-renewed, which can leave resources running at On-Demand pricing.

I have listed below the commands to get information about running and reserved resources (RDS and Redshift). You can also get this information from the AWS console under the respective service.

List running RDS DB instances

$aws rds describe-db-instances --query  "DBInstances[*].[DBInstanceIdentifier,DBInstanceClass,Engine,DBInstanceStatus,MultiAZ]" --output table

List RDS reservations

$aws rds describe-reserved-db-instances --query "ReservedDBInstances[*].[ReservedDBInstanceId,ProductDescription,DBInstanceClass,StartTime,Duration,DBInstanceCount,MultiAZ]" --output table

List Running Redshift Clusters

$aws redshift describe-clusters --query "Clusters[*].[ClusterIdentifier,NodeType,ClusterStatus,ClusterCreateTime,NumberOfNodes]" --output table

List Active Redshift Reservations

$ aws redshift describe-reserved-nodes --query "ReservedNodes[?State=='active'].[ReservedNodeOfferingId,NodeType,StartTime,Duration,NodeCount]" --output table
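To spot a mismatch quickly, you can compare the running count against the active reservation count for a given engine and instance class. A minimal sketch using JMESPath filters in the AWS CLI (the engine, ProductDescription, and instance class values below are examples; adjust them for your environment):

$ aws rds describe-db-instances --query "length(DBInstances[?Engine=='postgres' && DBInstanceClass=='db.r3.4xlarge'])"

$ aws rds describe-reserved-db-instances --query "sum(ReservedDBInstances[?State=='active' && ProductDescription=='postgresql' && DBInstanceClass=='db.r3.4xlarge'].DBInstanceCount)"

If the first number is larger than the second, the difference is being billed at On-Demand rates.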

References

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html

https://docs.aws.amazon.com/redshift/latest/mgmt/purchase-reserved-node-instance.html

Linux: ssh equivalence and SELinux

This is a quick post summarizing issues encountered while setting up ssh equivalence on EC2 instances. I was setting up RHEL7 EC2 instances and followed the procedure below to set up ssh equivalence:

  • Generate rsa key-pair using ssh-keygen -t rsa on both hosts
  • Copy the public keys to the remote server’s authorized_keys file
  • Modify the file permissions to 600

But when I tried to ssh to the remote host, it failed with the following error:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

I verified that the directory (.ssh) and file permissions were correct. Then, I checked the SELinux context using the ls -Z option.

-bash-4.2$ ls -lZ *
-rw-r--r--. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 authorized_keys
-rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 id_rsa
-rw-r--r--. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 id_rsa.pub
-rw-r--r--. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 known_hosts

As per the above output, these files are labeled with the postgresql_db_t type context. I used getenforce to verify that SELinux was in enforcing mode on this host. This can also be verified by viewing the contents of /etc/selinux/config.

# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

I decided to use the restorecon command, which restores the SELinux security context for files and directories to their default values.

-bash-4.2$ restorecon -Rv /var/lib/pgsql/.ssh/
restorecon reset /var/lib/pgsql/.ssh context unconfined_u:object_r:postgresql_db_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /var/lib/pgsql/.ssh/id_rsa context unconfined_u:object_r:postgresql_db_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /var/lib/pgsql/.ssh/id_rsa.pub context unconfined_u:object_r:postgresql_db_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /var/lib/pgsql/.ssh/authorized_keys context unconfined_u:object_r:postgresql_db_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /var/lib/pgsql/.ssh/known_hosts context unconfined_u:object_r:postgresql_db_t:s0->unconfined_u:object_r:ssh_home_t:s0

-bash-4.2$ ls -lZ *
-rw-r--r--. postgres postgres unconfined_u:object_r:ssh_home_t:s0 authorized_keys
-rw-------. postgres postgres unconfined_u:object_r:ssh_home_t:s0 id_rsa
-rw-r--r--. postgres postgres unconfined_u:object_r:ssh_home_t:s0 id_rsa.pub
-rw-r--r--. postgres postgres unconfined_u:object_r:ssh_home_t:s0 known_hosts

As you can see, restorecon fixed the labels by changing the type from postgresql_db_t to ssh_home_t. I performed ssh again and it worked!

Reference – https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-selinux_contexts_labeling_files

 

Querying RDS PostgreSQL logs using Amazon Athena

PostgreSQL logs contain useful information which can help in troubleshooting performance issues, auditing, and performing historical analysis. Some of the items which can be logged are listed below:

  • Client connection and disconnection messages
  • Failed DB connection request
  • Database SQL statement errors
  • Database Fatal Errors
  • Auditing Information
  • Database Lock history
  • Long running queries and Query plan logging
  • High Temp/Sort Usage
  • Checkpoint Information
  • Long running Auto-Vacuum operations

RDS PostgreSQL is an AWS managed web service which provides access to a relational database server but not to the underlying host operating system. Therefore, we need to interact with the DB using the AWS console/APIs for any maintenance activity or to view and download logs. If we want to use the logs for troubleshooting, we have to download them to our machines and go through them. This is not a scalable solution, as each user will be copying logs to their desktop. Another challenge is log retention, as RDS retains these logs for a maximum of 7 days and then rotates them. To solve these challenges, we can store the logs on S3 for long-term retention and use Amazon Athena to read them directly. Let’s do this now 🙂

As mentioned, PostgreSQL logs can be downloaded using the console or APIs, e.g. you can use the download_db_log_file_portion method in boto3 (the Python library provided by AWS) to download database logs in 1MB chunks. You can use this to build a custom solution which suits your needs, and schedule it as a Lambda function running at a regular interval using CloudWatch Events.
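Here is a minimal sketch of such a copy job using boto3 (the instance identifier, bucket name, and key layout are assumptions; adjust them for your environment):

import boto3

DB_INSTANCE = 'mypginstance'   # assumed RDS instance identifier
S3_BUCKET = 'db-logs'          # assumed target bucket

rds = boto3.client('rds')
s3 = boto3.client('s3')

def copy_logs_to_s3():
    # List the log files currently available on the instance
    logs = rds.describe_db_log_files(DBInstanceIdentifier=DB_INSTANCE)['DescribeDBLogFiles']
    for log in logs:
        log_name = log['LogFileName']      # e.g. error/postgresql.log.2018-04-26-04
        marker, chunks = '0', []
        while True:
            # Each call returns a portion of the log file (roughly 1MB)
            resp = rds.download_db_log_file_portion(
                DBInstanceIdentifier=DB_INSTANCE,
                LogFileName=log_name,
                Marker=marker)
            chunks.append(resp.get('LogFileData') or '')
            if not resp['AdditionalDataPending']:
                break
            marker = resp['Marker']
        # Store under a per-instance prefix; extend this to a year/month/day/hour
        # prefix (e.g. db-logs/dbname/2018/04/26/4/) so it lines up with the
        # partitioned Athena table described later in this post
        key = '{}/{}'.format(DB_INSTANCE, log_name.split('/')[-1])
        s3.put_object(Bucket=S3_BUCKET, Key=key, Body=''.join(chunks))

if __name__ == '__main__':
    copy_logs_to_s3()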

The following parameters are required to log extra information in PostgreSQL logs:

log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 10240
log_autovacuum_min_duration = 1000
log_min_duration_statement = 5000 [milliseconds; modify as per your environment]
pgaudit.log = 'ddl,role'
shared_preload_libraries = 'auto_explain,pg_stat_statements,pgaudit'
auto_explain.log_min_duration = 5000 [milliseconds; modify as per your environment]
log_duration = off
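On RDS, these parameters are set through a custom DB parameter group rather than postgresql.conf. A minimal sketch for the dynamic parameters using the AWS CLI (the parameter group name is an assumption; static parameters such as shared_preload_libraries need ApplyMethod=pending-reboot and an instance reboot):

$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-pg-params \
    --parameters "ParameterName=log_connections,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=log_lock_waits,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=log_min_duration_statement,ParameterValue=5000,ApplyMethod=immediate"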

Typical PostgreSQL log entries look like the one below.

2017-07-30 06:53:50 UTC:10.0.1.2(7969):dba_user@postgres:[8611]:LOG: LOG: duration: 65311.253 ms statement: update test_lock set b=2 where a=2;

The next step is to analyze these logs using Amazon Athena. Amazon Athena is an AWS service, built on Presto, which can read data directly from S3 and supports various data formats like CSV, JSON, ORC, Avro, and Parquet. We will use the Hive regex SerDe to split the log fields into useful columns.

CREATE EXTERNAL TABLE `postgres_logs`(
 `logtime` timestamp COMMENT 'Log timestamp', 
 `tz` string COMMENT 'Log timezone', 
 `client` string COMMENT 'Client IP or hostname', 
 `clientport` int COMMENT 'Client port', 
 `username` string COMMENT 'DB username making connection to database', 
 `dbname` string COMMENT ' database name', 
 `serverport` int COMMENT ' server port', 
 `log_level` string COMMENT ' Indicating log level i.e LOG,ERROR,FATAL,DETAILED', 
 `log_type1` string COMMENT ' Classification of event i.e connection, disconnection , audit', 
 `duration` decimal(38,6) COMMENT ' Applicable for timed queries (ms)', 
 `log_type2` string COMMENT '', 
 `message` varchar(40000) COMMENT ' Postgresql log message')
PARTITIONED BY ( 
 `year` int, 
 `month` int, 
 `day` int, 
 `hour` int)
ROW FORMAT SERDE 
 'org.apache.hadoop.hive.serde2.RegexSerDe' 
WITH SERDEPROPERTIES ( 
 'input.regex'='^(\\d{4}-\\d{2}-\\d{2}\\s\\d{2}:\\d{2}:\\d{2})\\s(\\S+):\\[?(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|[\\w\\.-]+)?\\]?\\(?(\\d+)?\\)?:\\[?(\\w+)?\\]?@\\[?(\\w+)?\\]?:\\[?(\\d+)?\\]?:(\\w+)?:\\s*(\\w+):?\\s*(\\d+\\.\\d+)?(?:\\s\\w+)?\\s*(\\w+)?:?(.*)', 
 'timestamp.formats'='yyyy-MM-dd HH:mm:ss ') 
STORED AS INPUTFORMAT 
 'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
 's3://<bucket>';

In the above DDL, we have made it a partitioned table. This helps minimize the data scanned, which improves query performance. It also reduces the AWS bill 🙂 as Athena billing is based on the amount of data scanned. To make this work, the log copy Lambda function has to store the data using a timestamp prefix, e.g. s3://db-logs/dbname/2018/04/26/4, and each new prefix has to be registered as a partition, as shown below. You can refer to this blog article on Athena performance tuning techniques to understand more about this.
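A minimal sketch of registering a partition, using the example prefix above (adjust the bucket and prefix for your layout):

ALTER TABLE postgres_logs ADD IF NOT EXISTS
  PARTITION (year=2018, month=4, day=26, hour=4)
  LOCATION 's3://db-logs/dbname/2018/04/26/4/';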

We can now read the logs from the S3 bucket and answer a lot of questions, e.g.

How many DB connections did we have in a particular hour?

SELECT count(*) FROM postgres_logs where log_type1='connection' and log_type2='authorized' and year=2018 and month=4 and day=26 and hour=8

Queries which ran for longer than 5 seconds (duration is in ms)

SELECT * FROM postgres_logs where duration>5000

Any fatal or error messages in logs?

select * from postgres_logs where log_level in ('ERROR','FATAL')

Querying Audit logs

select * from postgres_logs where log_type1 ='AUDIT'

Log entries with queries spanning multiple lines will not be shown properly. I searched for a fix but came across an article which explains that this is a Hive Regex SerDe limitation.

Currently Hive does not support recognition of embedded newlines in text formatted data, even via its OpenCSV implementation. This is noted at: https://cwiki.apache.org/confluence/display/Hive/CSV+Serde

The reason the RegEx does not work is cause its applied on top of the record reader, which gives the RegEx only a single line input (cause its broken in an upper layer).

Migrating Function based indexes from Oracle to PostgreSQL

In this post, I will share an issue faced while working with a function-based index in PostgreSQL. I was performing a database migration from Oracle to PostgreSQL and used the AWS Schema Conversion Tool (SCT) for converting the database schema. AWS SCT takes care of automatically converting the schema from one database engine to another. You can get more information about it in the AWS documentation or AWS Database Blog articles.

AWS SCT converted a function-based index, but while executing the DDL in PostgreSQL, it failed with the error:

 “ERROR: functions in index expression must be marked IMMUTABLE”

-- Oracle Index DDL
 
CREATE UNIQUE INDEX "APP"."STUDENT_IDX1" ON "APP"."STUDENT" (CASE "IS_VALID" WHEN 1 THEN TO_CHAR("STUDENT_ID")||'-'||TO_CHAR("COURSE_ID") ELSE NULL END ) TABLESPACE "USERS" ; 
 
-- PostgreSQL DDL (Recommended by SCT)
 
CREATE UNIQUE INDEX student_idx1 ON student
USING BTREE ((CASE is_valid
    WHEN 1 THEN CONCAT_WS('', student_id::TEXT, '-', course_id::TEXT)
    ELSE NULL
END) ASC);

As per the PostgreSQL docs, a function can be one of three volatility types:

  • Volatile – It can return different results on successive calls with the same arguments
  • Stable – It is guaranteed to return the same results given the same arguments for all rows within a single statement
  • Immutable – It is guaranteed to return the same results given the same arguments forever

Let’s understand this with an example. In the example below, we use the current_timestamp function, and the output changes based on the client timezone setting:

postgres=> set timezone to 'US/Pacific';
SET
postgres=> select current_timestamp;
              now
-------------------------------
 2018-01-17 20:22:32.023775-08
(1 row)
postgres=> set timezone to 'UTC';
SET
postgres=> select current_timestamp;
              now
-------------------------------
 2018-01-18 04:22:46.227855+00
(1 row)

current_timestamp is tagged as STABLE, since its value does not change within a transaction, but it will change in the next transaction even though the input is the same.

But an immutable function’s result won’t change, e.g. the two SQL statements below will give the same result:

select * from emp where empid=4 

OR

select * from emp where empid = 2+2

In PostgreSQL, the function/expression used to create a function-based index needs to be immutable, i.e. the function is guaranteed to return the same result for the same arguments, to avoid data corruption.

We can identify whether a function is immutable/stable by querying the pg_proc catalog. The value of the provolatile field indicates the type:

SELECT proname, provolatile, proargtypes, proargtypes[0]::regtype AS argtype, prosrc FROM pg_proc WHERE proname like 'concat%';
proname  | provolatile | proargtypes | argtype |     prosrc
-----------+-------------+-------------+---------+----------------
concat    | s           | 2276        | "any"   | text_concat
concat_ws | s           | 25 2276     | text    | text_concat_ws
(2 rows)

Definition from docs

provolatile tells whether the function’s result depends only on its input arguments, or is affected by outside factors. It is i for “immutable” functions, which always deliver the same result for the same inputs. It is s for “stable” functions, whose results (for fixed inputs) do not change within a scan. It is v for “volatile” functions, whose results might change at any time. (Use v also for functions with side-effects, so that calls to them cannot get optimized away.)

We can see that concat and concat_ws are listed as stable functions, and their output can depend on client settings, e.g. setting parameters like extra_float_digits will impact the output of the concat function:

postgres=> select concat(1.0000000000003::float8,56);
      concat
-----------------
1.000000000000356
(1 row)
postgres=> set extra_float_digits TO 3;
SET
postgres=> select concat(1.0000000000003::float8,56);
     concat
-----------------------
1.0000000000002999856
(1 row)

We were able to work around this by creating a new function marked IMMUTABLE which accepts text inputs:

CREATE OR REPLACE FUNCTION immutable_concat ( text, text, text, text ) RETURNS text AS 'SELECT $1 || $2 || $3 || $4;' LANGUAGE SQL IMMUTABLE;
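You can confirm that the new function is marked immutable by reusing the pg_proc query shown earlier (provolatile should come back as 'i'):

SELECT proname, provolatile FROM pg_proc WHERE proname = 'immutable_concat';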

New index statement

CREATE UNIQUE INDEX student_idx1
ON student
USING BTREE ((CASE is_valid
    WHEN 1 THEN immutable_concat('', student_id::TEXT, '-', course_id::TEXT)
    ELSE NULL
END) ASC);

Update WordPress Installations to >4.7.2

This post is applicable to hosted WordPress installations where auto-updates are disabled.

Yesterday, I noticed a blog post titled “Hacked by Unknown” on the Askdba blog.

The post was written by a white hat hacker who exploited the content injection vulnerability in WordPress 4.7.0 and 4.7.1. This vulnerability allows any visitor (an unauthenticated user) to assume a role that can edit/create blog posts. Since auto-updates were disabled, security patches had to be applied manually.
I had disabled auto-updates as they had broken my WordPress installation earlier. But I have enabled them now and would recommend that everyone either upgrade their installations manually or enable auto-updates.

If auto-updates were disabled in your installation, you can enable them by removing the following line from wp-config.php:

define( 'WP_AUTO_UPDATE_CORE', false );

More details regarding this vulnerability can be found  here