Security

Using SSL to securely connect to Amazon Redshift Cluster

Amazon Redshift is a fully managed cloud data warehouse from AWS for running analytic workloads. Many customers are required to encrypt data in transit as part of their security guidelines. Redshift supports SSL connections to encrypt data in transit and provides server certificates so that clients can validate the server they are connecting to.

Before we discuss how SSL works with Redshift, let's understand why we need it. SSL encrypts client/server communications and provides protection against three types of attack:

Eavesdropping – On unencrypted connections, a hacker could use network tools to inspect traffic between the client and server and steal data and database credentials. With SSL, all the traffic is encrypted.

Man in the middle (MITM) – In this case a hacker could hack DNS and redirect the connection to a different server than intended. SSL uses certificate verification to prevent this, by authenticating the server to the client.

Impersonation – In this case a hacker could use stolen database credentials to connect to the server and gain access to data. SSL uses client certificates to prevent this, by making sure that only holders of valid certificates can access the server.

For more details, refer to this link.

You can easily enable SSL on an Amazon Redshift cluster by setting require_ssl to true in the cluster's parameter group, as sketched below.
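As an illustration, here is a minimal sketch of setting require_ssl programmatically with boto3 (you can equally do this from the AWS console or CLI). The parameter group name "my-redshift-params" is hypothetical; I am assuming a parameter group already exists and is attached to the cluster. require_ssl is a static parameter, so the cluster needs a reboot for the change to take effect.

# Sketch: set require_ssl=true on a Redshift parameter group using boto3.
# Assumes AWS credentials are configured and the parameter group
# "my-redshift-params" (hypothetical name) is attached to the cluster.
import boto3

redshift = boto3.client('redshift', region_name='ap-southeast-1')

redshift.modify_cluster_parameter_group(
    ParameterGroupName='my-redshift-params',
    Parameters=[
        {'ParameterName': 'require_ssl', 'ParameterValue': 'true'},
    ],
)
# require_ssl is a static parameter: reboot the cluster for it to apply.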

In this blog post, I will discuss three sslmode settings – require, verify-ca, and verify-full – using psql and Python.

Setup details

I have an Amazon Redshift cluster with "Publicly accessible" set to "No", and I will be accessing it from my local machine. Since the database is in a private subnet, I need to use port forwarding via a bastion host. Make sure the bastion host IP is whitelisted in the Redshift security group to allow connections.

## Add the key in ssh agent
ssh-add <your key>

## Here the bastion host IP is 1.2.3.4 and we want to connect to a Redshift cluster in Singapore running on port 5439. We forward traffic on localhost port 9200 to Redshift
ssh -L 9200:redshift-cluster.xxxxxx.ap-southeast-1.redshift.amazonaws.com:5439 ec2-user@1.2.3.4

When we set require_ssl to true, we instruct Redshift to accept only encrypted connections, so any client that tries to connect without SSL is rejected. To test this, let's modify the sslmode setting for the psql client by setting the PGSSLMODE environment variable.

export PGSSLMODE=disable
psql -h localhost -p 9200 -d dev -U dbmaster
psql: FATAL: no pg_hba.conf entry for host "::ffff:172.31.xx.9", user "dbmaster", database "dev", SSL off

As we can see, connections without SSL are rejected. Let's now go through the SSL modes – require, verify-ca, and verify-full.

1.  sslmode=require

With an sslmode setting of require, we are asking for an encrypted connection. If a root certificate file is present, psql will also use it to validate the server and the behavior will be the same as with verify-ca. But if the certificate file is not present, it won't complain (unlike verify-ca) and will still connect to the Redshift cluster.

export PGSSLMODE=require
psql -h localhost -p 9200 -d dev -U dbmaster
psql (11.5, server 8.0.2)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

We can see that psql has made an SSL connection and is using TLS 1.2.

2. sslmode=verify-ca

If we use verify-ca, the server is verified by checking the certificate chain up to the root certificate stored on the client. At this point, we also need the Redshift certificate bundle, which has to be downloaded from the link provided in the Amazon Redshift documentation. To demonstrate that we really need the certificate, let's try connecting without it.

export PGSSLMODE=verify-ca 
psql -h localhost -p 9200 -d dev -U dbmaster 
psql: error: could not connect to server: root certificate file "/Users/amit/.postgresql/root.crt" does not exist
Either provide the file or change sslmode to disable server certificate verification.

psql is complaining that it couldn't find the certificate and that we should either provide the SSL certificate or change the sslmode setting. Let's download the certificate and store it under the home directory.

mkdir ~/.postgresql
cd ~/.postgresql

curl -o root.crt https://s3.amazonaws.com/redshift-downloads/redshift-ca-bundle.crt

After downloading the certificate and placing it in the desired directory, our connection attempt succeeds.

3. sslmode=verify-full

Next, if we want to prevent a man-in-the-middle (MITM) attack, we need to use sslmode=verify-full. In this case the server hostname provided in the psql host argument is matched against the name stored in the server certificate. If the hostname matches, the connection succeeds; otherwise it is rejected.

export PGSSLMODE=verify-full
psql -h localhost -p 8192 -d dev -U awsuser
psql: error: could not connect to server: server certificate for "*.xxxxxx.ap-southeast-1.redshift.amazonaws.com" does not match host name "localhost"

In our test the connection fails, as we are using port forwarding and localhost doesn't match the Redshift hostname pattern.

What this means is that if you use a service like Route 53 to provide friendly names, verify-full won't work, as the hostname specified in the psql command and the name presented in the certificate don't match. If your security team is OK with the verify-ca option, you can revert to that setting; otherwise you will have to get rid of the aliases and use the actual hostname.

In my case, I can resolve the error by connecting to the Redshift cluster from the bastion host (instead of my localhost tunnel setup) and using the psql command with the actual hostname.

psql -h redshift-cluster.xxxxxx.ap-southeast-1.redshift.amazonaws.com -p 5439 -d dev -U dbmaster

SSL Connection using Python

Next, let's see how you can connect to the Redshift cluster using Python code. This is useful if you have Lambda functions or other client applications written in Python. For this example, we will use the PyGreSQL module (imported as pgdb) to connect to the Redshift cluster.

$ python
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import pgdb
>>> rs_port =5439
>>> rs_user = 'dbmaster'
>>> rs_passwd='****'
>>> rs_dbname = 'dev'
>>> rs_host='redshift-cluster.xxxxx.ap-southeast-1.redshift.amazonaws.com'
>>> rs_sslmode='verify-ca'

>>> conn = pgdb.connect(dbname=rs_dbname, host=rs_host, port=rs_port, user=rs_user, password=rs_passwd, sslmode=rs_sslmode)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/miniconda3/envs/venv37/lib/python3.7/site-packages/pgdb.py", line 1690, in connect
cnx = _connect(dbname, dbhost, dbport, dbopt, dbuser, dbpasswd)
pg.InternalError: root certificate file "/home/ec2-user/.postgresql/root.crt" does not exist
Either provide the file or change sslmode to disable server certificate verification.

Before running the above code, I removed the certificate file to show that pgdb.connect requires the SSL certificate. Let's now place the certificate in a non-default location like "/home/ec2-user/root.crt" and use the sslrootcert argument to pass its location.

>>> rs_cert_path='/home/ec2-user/root.crt'
>>> conn = pgdb.connect(dbname=rs_dbname, host=rs_host, port=rs_port, user=rs_user, password=rs_passwd, sslmode=rs_sslmode, sslrootcert=rs_cert_path)
>>> cursor=conn.cursor()
>>> cursor.execute("select current_database()")
<pgdb.Cursor object at 0x7fa73bfe15d0>
>>> print(cursor.fetchone())
Row(current_database='dev')

As you can see above, after passing the SSL certificate, the connection succeeds and we can fetch data from the Redshift cluster.
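If you also want to verify on the server side which connections used SSL, one option is to query the stl_connection_log system table, which records the negotiated SSL version and cipher per connection. This is a minimal sketch based on my reading of the Redshift documentation; the event/column names may vary by Redshift version, and non-superusers typically see only their own rows in system tables.

# Server-side check (sketch): list recent authenticated connections and their SSL details.
# Assumes the connected user is allowed to read stl_connection_log.
cursor.execute(
    "select recordtime, username, sslversion, sslcipher "
    "from stl_connection_log "
    "where event = 'authenticated' "
    "order by recordtime desc limit 5"
)
for row in cursor.fetchall():
    print(row)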

Update WordPress Installations to >4.7.2

This post is applicable to self-hosted WordPress installations where auto-updates are disabled.

Yesterday, I noticed a blog post titled "Hacked by Unknown" on the Askdba blog.

The post was written by a white hat hacker who exploited the content injection vulnerability in WordPress 4.7.0 and 4.7.1. This vulnerability allows any visitor (an unauthenticated user) to edit or create blog posts. Since auto-updates were disabled, security patches had to be applied manually.
I had disabled auto-updates as they had earlier broken my WordPress installation. I have enabled them now and would recommend that everyone either upgrade their installations manually or enable auto-updates.

If auto-updates are disabled, you can enable them by removing the following line from wp-config.php:

define( 'WP_AUTO_UPDATE_CORE', false );

More details regarding this vulnerability can be found here.

11g: Multiple failed login attempts can block new application connections

In Oracle Database 11g, a new feature/enhancement has the potential to bring down an application when multiple sessions hit the database with a wrong password.

As per Bug 7715339 – Logon failures causes “row cache lock” waits – Allow disable of logon delay [ID 7715339.8]

In 11g there is an intentional delay between allowing failed logon attempts to retry. After 3 successive failures a sleep delay is introduced starting
at 3 seconds and extending to 10 seconds max. During each delay the user X row cache lock is held in exclusive mode preventing any concurrent logon attempt as user X (and preventing any other operation which would need the row cache lock for user X).

Let's simulate this using a test case. I am using an 11.2.0.3 database. Open two or more database server connections.

Connection 1

create user amit identified by amit;
grant connect to amit;
alter user amit profile MONITORING_PROFILE;

MONITORING_PROFILE has failed_login_attempts set to unlimited.

Create two scripts

$ cat 1.sh
## attempt a login as amit with a wrong password
sqlplus -s amit/test123 <<EOF
EOF

$ cat lock_db.sh
#!/bin/bash
## fire 1000 failed login attempts in the background
for i in {1..1000}
do
nohup sh 1.sh &
done

I have set the loop to 1000, which you can increase depending on your process limit. This is a test database with the processes parameter set to 150. Execute the script:

sh lock_db.sh 

Now if you go to Connection 2, you will see the following issues:

a) The database will not allow fresh connections for a short time, as you will get ORA-00020: maximum number of processes (150) exceeded errors. After that, database connections will be possible again.
b) You cannot connect as the AMIT user for a considerable time even with the correct password. All sessions will be waiting on library cache lock.

 select event,count(*) from V$session group by event order by 2;

EVENT								   COUNT(*)
---------------------------------------------------------------- ----------
Streams AQ: waiting for time management or cleanup tasks		  1
VKTM Logical Idle Wait							  1
Space Manager: slave idle wait						  1
SQL*Net message from client						  1
ges remote message							  1
ASM background timer							  1
GCR sleep								  1
smon timer								  1
pmon timer								  1
Streams AQ: qmn coordinator idle wait					  1
asynch descriptor resize						  1
Streams AQ: qmn slave idle wait 					  1
PING									  1
class slave wait							  2
gcs remote message							  2
DIAG idle wait								  2
wait for unread message on broadcast channel				  2
rdbms ipc message							 18
library cache lock							101

select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,KSUSEMNM module,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,lk.kgllkcnt lck_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
 from
  x$kgllk lk,  x$kglob ob,x$ksuse ses
  , v$session_wait w
where lk.kgllkhdl in
(select kgllkhdl from x$kgllk where kgllkreq >0 )
and ob.kglhdadr = lk.kgllkhdl
and lk.kgllkuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

      SID    SERIAL# USERNAME	 MODULE 			     OBJ_OWNER	     OBJ_NAME		LCK_CNT  LOCK_MODE   LOCK_REQ STATE		  EVENT 			  WAIT_TIME SECONDS_IN_WAIT
---------- ---------- ---------- ----------------------------------- --------------- --------------- ---------- ---------- ---------- ------------------- ------------------------------ ---------- ---------------
       153	   77		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	36	  101		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	16	  137		testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
       143	  203		 testbox01		     84 		      0 	 0	    3 WAITING		  library cache lock			  0		  3
	26	  261		 testbox01		     84 		      1 	 3	    0 WAITED KNOWN TIME   library cache lock			119		  3

You will notice that the seconds_in_wait value keeps increasing. Also, 'alter system kill session' takes a long time to kill a session (it seems as if hung). As per the bug, any operation involving the row cache lock for that user will not be allowed. I confirmed this by trying to reset the user's profile to DEFAULT, and that session was stuck as well.

Until all these sessions are cleared from the database, you will not be allowed to log in. This can severely impact applications, as fresh connections will not be possible; pre-existing sessions will keep working.
If you repeat the same experiment in 10g, you will not face a similar issue.

To disable this behavior (i.e., no sleep delay), set the following event:

event="28401 trace name context forever, level 1"

You can also set it dynamically using:

alter system set events '28401 trace name context forever, level 1';

To go back to the original behavior:

alter system set events '28401 trace name context off';

Bug 7715339 mentions that if FAILED_LOGIN_ATTEMPTS is set to some value, then further attempts to log in will correctly fail immediately with no delay.

I tested this and found that it is not entirely true, as my script was again able to spawn 101 sessions waiting on library cache lock. A fresh connection with the correct password was again stuck, and the DBA_USERS view showed that the account was locked. However, in this case the sessions cleared much more quickly compared to the test case with failed_login_attempts set to unlimited.

Oracle Critical Patch Update January 2012

I guess most of us with My Oracle Support access have got an email from Oracle mentioning that Critical Patch Update January 2012 has been released. The main reason behind this is a security bug, reported by InfoWorld, which can make a database run out of SCNs. You can read the article here.

Summary

There is bug 12371955, which causes the SCN to be incremented at a very high rate when we issue the begin backup command in an 11g database. A patch is already available for this.
But InfoWorld seems to have found security flaws which can cause the SCN to be raised by, say, 1 trillion on interconnected databases. The Jan 2012 CPU patch seems to address this issue.

 

Database Security: Transparent Data Encryption

Data in the database is secured using authentication, authorization, and auditing mechanisms, but the information stored in the datafiles can be read using any hex-editor tool or other OS-level utilities, which poses a threat to sensitive data. To overcome this limitation, TDE is used to secure sensitive data in database columns stored in the datafiles. TDE is a key-based access control mechanism. For each table having an encrypted column, an encryption key is generated, which in turn is encrypted with the database server's master key. The master key is generated at the time the Wallet is created. The master key is stored outside the Oracle database in a Wallet, while the (encrypted) column keys for all tables containing encrypted columns are stored in dictionary tables in the database.

How to set up TDE:

To use TDE one must have the "ALTER SYSTEM" privilege and a valid password for the Oracle Wallet.

1. Make sure that the listener and database are up and running and that the compatibility level of the database is set to 10.2.

2. Enable Transparent Data Encryption by issuing the following command:

<span style="font-size: small;">S</span>QL &gt; alter system set encryption key identified by "password";

If the password is not enclosed in double quotes, it will be taken in capital letters. This command will do the following things:
– Create a Wallet file (ewallet.p12)
– Open the Wallet
– Generate the database server's master encryption key
By default, the above command will create the ewallet.p12 file under the $ORACLE_BASE/admin/SID/WALLET directory. So it is mandatory to create a directory named "WALLET" under $ORACLE_BASE/admin/SID; if it is not created, the following error will occur while creating the Wallet file:

SQL> alter system set encryption key identified by "oracle";
*
ERROR at line 1:
ORA-28368: cannot auto-create wallet

The default location is helpful if someone wants different Wallets for different instances running under the same ORACLE_HOME. We can manually specify the location of the Wallet file by modifying the $ORACLE_HOME/network/admin/sqlnet.ora file as:

ENCRYPTION_WALLET_LOCATION=
(SOURCE=(METHOD=FILE)(METHOD_DATA=
(DIRECTORY=C:\Oracle\product\10.2.0\db_1\wallet)))

Any valid existing directory can be specified here.

Why is the Wallet needed?

The encryption keys are stored in the Wallet. The database needs to load the master key and column encryption keys into memory from the Wallet before it can start encrypting/decrypting columns. The Wallet remains open until we shut down the database. It is closed when the database is shut down and needs to be opened again after database startup as follows:

<span style="font-size: small;"><span style="font-family: arial,helvetica,sans-serif;">SQL &gt; alter system set encryption wallet open identified by "password";</span></span>

If the Wallet remains closed, we will not be able to access tables containing encrypted columns and will get the following error:

ORA-28365: wallet is not open

Now TDE can be used for this database.

Example:

SQL> conn scott/tiger

SQL> l
1  create table sood(
2  first_name varchar2(10),
3  last_name varchar2(10),
4  empid number,
5* salary number ENCRYPT);

SQL> create user saurabh identified by oracle;

User created.

SQL> grant create session to saurabh;

Grant succeeded.

Insert data into scott.sood as:

SQL> insert into sood values ('jonh','wallter',1,30000);

1 row created.

SQL> insert into sood values ('Michael','Clark',2,20000);

1 row created.

SQL> insert into sood values ('Ricky','Ponting',3,10000);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from sood;

FIRST_NAME LAST_NAME       EMPID     SALARY
---------- ---------- ---------- ----------
jonh       wallter             1      30000
Michael    Clark               2      20000
Ricky      Ponting             3      10000

SQL> show user

USER is "SCOTT"

SQL> grant select on sood to saurabh;

Grant succeeded.

As user saurabh has the select privilege on sood (the table containing an encrypted column), it will be able to retrieve (decrypt) data without performing any additional steps.

SQL> conn saurabh/oracle

Connected.

SQL> select * from scott.sood;

FIRST_NAME LAST_NAME       EMPID     SALARY
---------- ---------- ---------- ----------
jonh       wallter             1      30000
Michael    Clark               2      20000
Ricky      Ponting             3      10000

If we mine the redo logs, LogMiner will show the SALARY column as "Unsupported Type" under SQL_REDO, because the data was encrypted before it was written to disk.

Export/Import and TDE:

1. exp/imp will not work with tables having encrypted columns; the following error will be given:

<span style="font-size: small;">-</span>-----------------------------------------------------------
C:\Documents and Settings\sausood&gt;exp scott/tiger tables=sood
Export: Release 10.2.0.1.0 - Production on Fri Dec 19 13:05:55 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
EXP-00107: Feature (COLUMN ENCRYPTION) of column SALARY in table SCOTT.SOOD is not supported. The table will not be exported.
Export terminated successfully with warnings.
--------------------------------------------------------------

TDE is only compatible with DATAPUMP export/import.

2. Backing up tables containing encrypted columns:

SQL> select * from DBA_ENCRYPTED_COLUMNS;

OWNER  TABLE_NAM COLUMN_NAM ENCRYPTION_ALG                SAL
------ --------- ---------- ----------------------------- ---
SCOTT  SOOD      SALARY     AES 192 bits key              YES

Use the following command to export TDE-enabled tables:

$ expdp scott/tiger directory=dir dumpfile=exptde.dmp tables=sood encryption_password="oracle"

The ENCRYPTION_PASSWORD parameter allows encrypted columns to remain encrypted while exporting them. The password has nothing to do with the encryption keys; it is used again while importing the tables. If ENCRYPTION_PASSWORD is not used, the data is exported in clear text and a warning message is displayed at the time of the Data Pump export:

"<strong>ORA-39173</strong>: Encrypted data has been stored unencrypted in dump file set. "

While importing, it is mandatory to have the same Wallet file for the database; otherwise the following error will occur:

"<strong>ORA-28362</strong>: master key not found"

Transparent Data Encryption FAQ

How To Change/Restore User Password in 11G

Oracle 11g introduces case-sensitive passwords for database authentication. Along with this, if you wish to change a password (temporarily) and reset it back to the old one, you will find that the password field in dba_users is empty. Prior to 11g we could use the following technique to change/restore a password:

SQL> create user amit identified by amit;

User created.

SQL> grant create session to amit;

Grant succeeded.

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> select username,password from dba_users where username='AMIT';

USERNAME                       PASSWORD
------------------------------ ------------------------------
AMIT                           9DEC0D889E8E9A6B

SQL> alter user amit identified by abc;

User altered.

SQL> conn amit/abc
Connected.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> alter user amit identified by values '9DEC0D889E8E9A6B';

User altered.

SQL> conn amit/amit
Connected.

In 11g, if you query the password field, it will return NULL.

SQL> select username,password from dba_users where username='AMIT';

USERNAME                       PASSWORD
------------------------------ ------------------------------
AMIT

Let's first look at the case-sensitive password feature in 11g and then at the steps to change/restore a password.

SQL> create user amit identified by AMIT;

User created.

SQL> grant connect,resource to amit;

Grant succeeded.

SQL> conn amit/amit
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
SQL> conn amit/AMIT
Connected.

This behavior is controlled by the "sec_case_sensitive_logon" initialization parameter. If the value is true, then case-sensitive passwords are enforced.

SQL> select NAME,VALUE from V$SPPARAMETER where NAME='sec_case_sensitive_logon';

NAME                                     VALUE
---------------------------------------- --------------------
sec_case_sensitive_logon                 TRUE

SQL> conn / as sysdba
Connected.
SQL> alter system set sec_case_sensitive_logon=false;

System altered.

SQL> conn amit/amit
Connected.
SQL> conn / as sysdba
Connected.
SQL> alter system set sec_case_sensitive_logon=true;

System altered.

SQL> conn amit/amit
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
SQL> conn amit/AMIT
Connected.

Now, to reset the password in 11g, we need to query the spare4 column in the user$ table.

SQL> select spare4 from user$ where name='AMIT';

SPARE4
--------------------------------------------------------------------------------
S:2D058976AE8FAD8ECFCDB93835ACEE94C83EDE19169209155BB81FEE7DBB

SQL> alter user amit identified by abc12;

User altered.

SQL> conn amit/abc12
Connected.
SQL> conn / as sysdba
Connected.
SQL> alter user amit identified by values 'S:2D058976AE8FAD8ECFCDB93835ACEE94C83EDE19169209155BB81FEE7DBB';

User altered.

SQL> conn amit/abc12
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
SQL> conn amit/AMIT
Connected.

As per Metalink Note 429465.1, the DBA_USERS view has a new column PASSWORD_VERSIONS, rendered as follows:

decode(length(u.password),16,'10G ',NULL)||NVL2(u.spare4, '11G ' ,NULL)
for example:

SQL> select USERNAME, PASSWORD_VERSIONS from dba_users where rownum <5;

USERNAME                       PASSWORD
------------------------------ --------
SYS                            10G 11G
SYSTEM                         10G 11G
OUTLN                          10G 11G
DIP                            10G 11G

In this case it means that both old and new-style hash values are available for these users. The new hash value is stored in the USER$.SPARE4 column; as long as this remains NULL, the password has not been changed since the migration and the user will have the old case-insensitive password.

SQL> create user test identified by test;

User created.

SQL> select USERNAME, PASSWORD_VERSIONS from dba_users where username in ('AMIT','TEST');

USERNAME                       PASSWORD
------------------------------ --------
AMIT                           11G
TEST                           10G 11G

As I had reset the password using only the spare4 string, the password will be case-sensitive irrespective of the sec_case_sensitive_logon parameter setting. That is why we see a value of "11G" for user AMIT.

Update

When resetting the password, we also need to query the password column from user$ if we wish to use the case-insensitive feature in future. In my example above I used only the spare4 column value to reset the password, so if I now set sec_case_sensitive_logon=false, I will not be able to connect.

SQL> alter system set sec_case_sensitive_logon=false;

System altered.

SQL> conn amit/amit
ERROR:
ORA-01017: invalid username/password; logon denied

In case we wish to use both, we need to use identified by values 'S:spare4;password'. As I did not use the password field while resetting, I find that the password field in user$ is empty. To correct it, I had to change the password again.

SQL> select password,spare4 from user$ where name='AMIT';

PASSWORD                       SPARE4
------------------------------ ----------------------------------------------------------------------
                               S:2D058976AE8FAD8ECFCDB93835ACEE94C83EDE19169209155BB81FEE7DBB

SQL>  alter system set sec_case_sensitive_logon=true;

System altered.

SQL> alter user amit identified by AMIT;

User altered.

SQL> select password,spare4 from user$ where name='AMIT';

PASSWORD                       SPARE4
------------------------------ ----------------------------------------------------------------------
9DEC0D889E8E9A6B               S:F5DEBF680433864AA5A744C2368D8677A120939D083D74A2F6E99C5952AE

So to reset the password, following needs to be used.

SQL> select password,spare4 from user$ where name='AMIT';

PASSWORD                       SPARE4
------------------------------ ----------------------------------------------------------------------
9DEC0D889E8E9A6B               S:F5DEBF680433864AA5A744C2368D8677A120939D083D74A2F6E99C5952AE

SQL> alter user amit identified by values 'S:F5DEBF680433864AA5A744C2368D8677A120939D083D74A2F6E99C5952AE;9DEC0D889E8E9A6B';

User altered.

Thanks to Laurent for pointing this out. You can see his article for more information. You can use the code below to generate the password reset script:

select 'alter user '||name||' identified by values '''||password||''';' from user$ where spare4 is null and password is not null
union
select 'alter user '||name||' identified by values '''||spare4||';'||password||''';' from user$ where spare4 is not null and password is not null;