
How To Configure Exadata Database Machine in Enterprise Manager Cloud Control 13c (OEM13c)

I followed the steps in the Oracle documentation (https://docs.oracle.com/cd/E63000_01/EMXIG/ch2_deployment.htm#EMXIG215) to configure an Exadata Database Machine in OEM13c. If you want to configure your Exadata in OEM13c, follow the link above.
In this post I will share the mandatory configuration steps and some of the issues I faced while configuring Exadata in OEM13c.
NOTE: OEM13c agents only need to be deployed on the compute nodes of your Exadata machine.

Step 1: Deploy Exadata Plug-In in OEM13c.
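
The plug-in can be deployed from the console under Setup, Extensibility, and then Plug-ins. If you prefer the command line, here is a rough emcli sketch; the verb and plug-in ID (oracle.sysman.xa, the same ID that appears in the plug-in path later in this post) are assumptions you should verify with emcli help for your EM release:

$ emcli login -username=sysman
$ emcli sync
$ emcli deploy_plugin_on_server -plugin="oracle.sysman.xa"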

Step 2: For the EM agent to communicate with the ILOM SP, a user must be created on the ILOM SP of every compute node.
Create a database server ILOM SP (Service Processor) user.
Log in to the compute node ILOM as the "root" user:
# cd /SP/users
# create oemuser
Creating user…
Enter new password: ********
Enter new password again: ********

Created /SP/users/oemuser

Change to the new user’s directory and set the role:

# cd oemuser
/SP/users/oemuser

set role='cro'
Set 'role' to 'cro'

Now test the ILOM user ID created:

For Exadata X5-2:
# ipmitool -I lanplus -H <ComputeNodeILOMHostname> -U oemuser -P xxxxxx -L USER sel list last 10
It should display some results.

Now run the above steps on all Compute Node ILOMs.

Step 3: Push the OEM agent to the compute nodes.
From the OEM13c console, select Setup (top right corner), then Add Target, and then Add Targets Manually. Enter the compute node's hostname, select your OS version, fill in the rest of the details on the screen and click Deploy. This deploys the agent on the compute nodes you specified.
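
Once the deployment completes, you can verify the agent from each compute node (<AGENT_HOME> below is a placeholder for your agent installation directory):

$ <AGENT_HOME>/bin/emctl status agent
$ <AGENT_HOME>/bin/emctl upload agent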

 

Step 4: Run the discovery precheck script:
To ensure that discovery of the Exadata machine completes without any issues, you need to run exadataDiscoveryPreCheck.pl. This script is available under the Exadata plug-in location on the OEM13c OMS server, i.e.:
<OMS_agent installation directory>/plugins/oracle.sysman.xa.discovery.plugin_12.1.0.3.0/discover/dbmPreReqCheck/exadataDiscoveryPreCheck.pl. Verify the path as per your configuration and run the script. You can also download the script from MOS note 1473912.1.

NOTE: For the InfiniBand switch user you have to use "nm2user"; its default password is "changeme".
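
A minimal way to run the script (the path is an example based on the plug-in location above; adjust the version to your installation). The script prompts for environment details, and the note above about nm2user applies when it asks for the InfiniBand switch credentials:

$ cd <OMS_agent installation directory>/plugins/oracle.sysman.xa.discovery.plugin_12.1.0.3.0/discover/dbmPreReqCheck
$ perl ./exadataDiscoveryPreCheck.pl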

The script reported the following errors in my environment:
Verifying setup files consistency... 
------------------------------------ 
Verifying cell nodes... 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Verifying infiniband nodes... 
Infiniband node <IBNode Name>.domain is missing in one of the setup files. 
Infiniband node <IBNode Name> is missing in one of the setup files. 
Infiniband node <IBNode Name>.domain is missing in one of the setup files. 
Infiniband node <IBNode Name> is missing in one of the setup files. 
Infiniband node null is missing in one of the setup files. 
Verifying KVM nodes... 
KVM node null is missing in one of the setup files. 
Verifying PDU nodes... 
PDU node <PDUNode Name> is missing in one of the setup files. 
PDU node <PDUNode Name> is missing in one of the setup files. 
PDU node <PDUNode Name>.domain is missing in one of the setup files. 
PDU node <PDUNode Name>.domain is missing in one of the setup files. 
Setup files are not consistent ===> Not ok 
* Please make sure that node information in both parameter and schematic files 
is consistent. 
======================================================= 
* Please make sure ciphers are correctly set in all cell and compute nodes. 
Verifying SSH cipher definition for <CellNode Name> cell node... 
None of the expected ciphers were found in sshd_config file ===> Not ok 
* Please make sure ciphers are correctly set in sshd_config file. 
=========================================================

So there were two issues:
1. The parameter file and the schematic file were not in sync with each other.
2. A valid cipher was missing from the cell nodes' sshd_config file.
For the parameter file issue, we need to check two files under /opt/oracle.SupportTools/onecommand, em.params and databasemachine.xml, and make sure the entries are the same in both. In my case all the names in em.params were fully qualified (FQDN) while those in databasemachine.xml were not, so I modified em.params to remove the domain part from all names.
For the cipher issue, since the compute nodes did not error out on ciphers, I copied the Ciphers entry from a compute node to all the cell nodes and restarted the sshd service.
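
For reference, here is a sketch of the cipher fix, assuming the Ciphers value shown below is the one actually present on your compute nodes (check first) and that cell_group lists your cell node hostnames:

# Check which ciphers the compute node has configured
$ grep -i '^Ciphers' /etc/ssh/sshd_config
Ciphers aes128-ctr,aes192-ctr,aes256-ctr

# Append the same entry on all cell nodes and restart sshd
$ dcli -g cell_group -l root "echo 'Ciphers aes128-ctr,aes192-ctr,aes256-ctr' >> /etc/ssh/sshd_config; service sshd restart"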

After making these two changes I ran the exadataDiscoveryPreCheck.pl script again and it came out clean.

Step 5: Discovering an Exadata Database Machine

1. From the Enterprise Manager home page, select the Setup menu (upper right corner), Add Target, and then Add Targets Manually.

2. On the Add Targets Manually page, click Add Targets Using Guided Process. From Add Using Guided Process window, select Oracle Exadata Database Machine from the list and click Add.

3. On the Oracle Exadata Database Machine Discovery page, select one of the following target types:
13c target type
12c target type
I opted for the 13c target type.

4. On the Discovery Inputs page, enter the following information:
For the Discovery Agents section:
Agent URL: The agent deployed on the compute node. Click the search icon to select from the available URLs.
For the Schematic Files section:
Once you have specified the Agent URL, a new row (hostname and schematic file information) is added automatically. The default schematic file, databasemachine.xml, describes the hardware components of the Exadata Database Machine.
Click Set Credential to set the credentials for the host.
Check/modify the schematic file location.
Select the schematic file name from the drop-down menu.

5. On the InfiniBand Discovery page, enter the following information:
IB Switch Host Name: The InfiniBand switch host name. The IB Switch host name is usually pre-populated.
InfiniBand Switch ILOM host credential: The user name (usually ilom-admin or ilom-operator) and password for the InfiniBand switch ILOM host.

The rest of the steps are self-explanatory and easy to fill in.

On the Credentials page, after entering the root password, you will see two options under SNMP credentials:
— Credential Type SNMPV1
— Credential Type SNMPV3
I opted for SNMPV3, which requires an ExaCLI username/password. So you have to create an ExaCLI user as described on page 384 of
http://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471.pdf.
Create the ExaCLI user and provide the information asked for under SNMPV3.
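
For reference, cell-side users for ExaCLI are created with CellCLI. Below is a rough sketch with illustrative role/user names; verify the exact syntax and privileges against the document referenced above for your Exadata software version, and run it on every cell:

CellCLI> create role administrator
CellCLI> grant privilege all actions on all objects all attributes with all options to role administrator
CellCLI> create user exacli_monitor password=*
CellCLI> grant role administrator to user exacli_monitor

Then supply the same username/password on the SNMPV3 credentials screen.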

Click Submit and it will take some time to discover the Exadata DB Machine.

After this, you can see "Exadata" under the Targets tab on the OEM13c home page.

 

12.1.0.2 PDB fails to come out of restricted mode

This one is a nasty bug 🙂 I was trying to set up an Oracle PDB in a test environment for the first time and got stuck with an ORA-01035 error.

[oracle@oracle11g ~]$ sqlplus hr/hr@//oracle11g:1522/engg

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 3 07:34:58 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege

Checking the status of this pluggable database, I could see that the DB was open but in restricted mode.

SQL>  select con_id,logins,status from v$instance;

    CON_ID LOGINS     STATUS
---------- ---------- ------------
         0 RESTRICTED OPEN

SQL> select con_id,name,open_mode from v$pdbs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 ENGG                           READ WRITE

-- The following warning had been raised while opening/creating the PDB

SQL> alter pluggable database engg open;

Warning: PDB altered with errors.

I tried creating the PDB manually and via DBCA multiple times, but every PDB remained in restricted mode. The alert log did not report any error or explanation for this, and searching the internet did not turn up any relevant hits either. Finally MOS pointed to Bug 19174942, which notes that this can happen in 12.1.0.2 if a common user has a default tablespace that is not present in the PDB. I knew this had to be my issue, as I had created a common user with default tablespace USERS but the PDBs did not have this tablespace. The fix was to create the tablespace in all the PDBs and restart each PDB:

SQL> alter session set container=engg;

Session altered.

SQL> create tablespace users datafile '/oracle/oradata1/orcl12c/pdbseed/user_01.dbf' size 100m ;

Tablespace created.

SQL> alter pluggable database engg close;

Pluggable database altered.

SQL> alter pluggable database engg open;

Pluggable database altered.

SQL>  select instance_name,status,logins from v$instance;

INSTANCE_NAME    STATUS       LOGINS
---------------- ------------ ----------
orcl12c          OPEN         ALLOWED
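
To check whether any other common user has a default tablespace that is missing from one of the PDBs, a dictionary query along these lines (run from the CDB root) can help; this is only a sketch based on the CDB_USERS and CDB_TABLESPACES views:

select u.con_id, u.username, u.default_tablespace
from cdb_users u
where u.common = 'YES'
and not exists (select 1 from cdb_tablespaces t
                where t.con_id = u.con_id
                and t.tablespace_name = u.default_tablespace);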

 

LGWR terminating instance due to error 338

Recently we came across an issue where our DB crashed with an ORA-00338 error.

Errors in file /oracle/diag/rdbms/orcl11g/orc11g/trace/orc11g_lgwr_24118.trc:
ORA-00338: log 2 of thread 1 is more recent than control file
ORA-00312: online log 2 thread 1: '/oracle/oradata/orcl11g/redo02.log'
LGWR (ospid: 24118): terminating the instance due to error 338

The DB could not be restarted, as it raised the same errors while opening. For multiplexed redo logs, the error is reported for both members.

Error Description (Reference: http://psoug.org/oraerror/ORA-00338.htm):
Log string of thread string is more recent than control file

Error Cause:

The control file change sequence number in the log file is greater than the number in the control file. This implies that the wrong control file is being used. Note that repeatedly causing this error can make it stop happening without correcting the real problem. Every attempt to open the database will advance the control file change sequence number until it is great enough.

Action:
Use the current control file or do backup control file recovery to make the control file current. Be sure to follow all restrictions on doing a backup control file recovery.

The explanation above suggests a problem with the controlfile. Normally these errors are seen when doing incomplete recovery. To troubleshoot, we took dumps of the redo log files (on the recommendation of Oracle Support).

Note: The logs below are from a test system that was used to reproduce the issue.

SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo01.log' validate;
System altered.
SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo02.log' validate;
ERROR at line 1:
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/oracle/oradata/orcl11g/redo02.log'
SQL> alter system dump logfile '/oracle/oradata/orcl11g/redo03.log' validate;
*
ERROR at line 1:
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/oracle/oradata/orcl11g/redo03.log'

The dumps of redo02.log and redo03.log failed with errors saying they do not contain any redo. Since the redo01.log dump was successful, we looked at its trace.

DUMP OF REDO FROM FILE '/oracle/oradata/orcl11g/redo01.log'
 Opcodes *.*
 RBAs: 0x000000.00000000.0000 thru 0xffffffff.ffffffff.ffff
 SCNs: scn: 0x0000.00000000 thru scn: 0xffff.ffffffff
 Times: creation thru eternity
 VALIDATE ONLY
 FILE HEADER:
 Compatibility Vsn = 186647552=0xb200400
 Db ID=970369526=0x39d6a9f6, Db Name='TESTDB'
 Activation ID=2650290266=0x9df8385a
 Control Seq=5124=0x1404, File size=102400=0x19000
 File Number=1, Blksiz=512, File Type=2 LOG
 descrip:"Thread 0001, Seq# 0000000001, SCN 0x00000016f528-0xffffffffffff"
 thread: 1 nab: 0xffffffff seq: 0x00000001 hws: 0x3 eot: 1 dis: 0
 resetlogs count: 0x33acc28a scn: 0x0000.0016f528 (1504552)
 prev resetlogs count: 0x3377bd37 scn: 0x0000.000e2006 (925702)
 Low scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Next scn: 0xffff.ffffffff 01/01/1988 00:00:00
 Enabled scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Thread closed scn: 0x0000.0016f528 (1504552) 12/22/2014 06:13:30
 Disk cksum: 0xcec6 Calc cksum: 0xcec6
 Terminal recovery stop scn: 0x0000.00000000
 Terminal recovery 01/01/1988 00:00:00
 Most recent redo scn: 0x0000.00000000
 Largest LWN: 0 blocks
 End-of-redo stream : No
 Unprotected mode
 Miscellaneous flags: 0x800000
 Thread internal enable indicator: thr: 0, seq: 0 scn: 0x0000.00000000
 Zero blocks: 0
 Format ID is 2
 redo log key is 1679de3ad36cdd2684143daaa1635b8
 redo log key flag is 5
 Enabled redo threads: 1
END OF REDO DUMP
----- Redo read statistics for thread 1 -----

If you look at the dump file, it says the sequence is 1 and the DB name is 'TESTDB'. Our instance name is orc11g and the last sequence# was higher than 1 (this can also be confirmed from v$log). This indicated that our redo logs had been overwritten by some other process, and the DB name in the redo log hinted that it was the reporting clone refresh process. The problem was that the redo log volume was cross-mounted on a reporting clone, which overwrote the redo logs during its refresh.

As a fix, a new volume was provisioned for the clone's redo logs and the volume export was revoked. Since the current redo log had also been overwritten, we had to restore the last hot backup and perform incomplete recovery up to the last ETL start time (this was a data warehouse DB). In an OLTP database this would have meant data loss.

This issue is easily reproducible: if you clone an instance reusing the same redo log file names and open the clone, you will hit this error in your source database. Datafiles are protected by DBWR, so we are safe from the risk of another database opening them; such an attempt fails with ORA-01157:

ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: '/oracle/oradata/orcl11g/undotbs01.dbf'

MGMTDB: Grid Infrastructure Management Repository

MGMTDB is a new database instance used to store Cluster Health Monitor (CHM) data. In 11g this data was kept in a Berkeley DB database, but starting with Oracle Database 12c it is stored in an Oracle database instance.
In 11g, the .bdb files were stored under $GRID_HOME/crf/db/<hostname> and used to take up a lot of space (>100 GB) due to a bug in 11.2.0.2.
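
On an 11g cluster you can check how much space the CHM Berkeley DB files are taking with a quick size check (paths below are placeholders):

$ du -sh <GRID_HOME>/crf/db/<hostname>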

During 12c Grid Infrastructure installation, there is an option to configure the Grid Infrastructure Management Repository.

[Screenshot: Grid Infrastructure Management Repository option in the installer]

If you choose Yes, you will see an instance named -MGMTDB running on one of the nodes in your cluster.

[oracle@oradbdev02]~% ps -ef|grep mdb_pmon
oracle    7580     1  0 04:57 ?        00:00:00 mdb_pmon_-MGMTDB

This is a single-instance Oracle database managed by Grid Infrastructure; it fails over to a surviving node if the hosting node crashes. You can identify the current master using the command below:

-bash-4.1$ oclumon manage -get MASTER

Master = oradbdev02

This DB instance can be managed using srvctl commands. The current master can also be identified using the status command:

$srvctl status mgmtdb 
Database is enabled
Instance -MGMTDB is running on node oradbdev02

We can look at the mgmtdb configuration using:

$srvctl config mgmtdb
Database unique name: _mgmtdb
Database name: 
Oracle home: /home/oragrid
Oracle user: oracle
Spfile: +VDISK/_mgmtdb/spfile-MGMTDB.ora
Password file: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

Replace config with start or stop to start or stop the database.
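
For example, to bounce the repository database on its current node:

$ srvctl stop mgmtdb
$ srvctl start mgmtdb
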
Datafiles for the repository database are stored in the same location as the OCR/voting disk:

SQL> select file_name from dba_data_files union select member file_name from V$logfile;

FILE_NAME
------------------------------------------------------------
+VDISK/_MGMTDB/DATAFILE/sysaux.258.819384615
+VDISK/_MGMTDB/DATAFILE/sysgridhomedata.261.819384761
+VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687
+VDISK/_MGMTDB/DATAFILE/system.259.819384641
+VDISK/_MGMTDB/DATAFILE/undotbs1.257.819384613
+VDISK/_MGMTDB/ONLINELOG/group_1.263.819384803
+VDISK/_MGMTDB/ONLINELOG/group_2.264.819384805
+VDISK/_MGMTDB/ONLINELOG/group_3.265.819384807

We can verify the same using oclumon command

-bash-4.1$ oclumon manage -get reppath

CHM Repository Path = +VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687

Since the repository is stored in the same location as the voting disk, if you opt to configure the Management Database you will need a voting disk larger than 5 GB (3.2 GB+ is used by MGMTDB). During GI installation I had tried adding a 2 GB voting disk and it failed, saying the size was insufficient. The error did not indicate that the space was needed for the Management Repository, but I now think this is because the repository shares its location with the OCR/voting disk.
The default (and minimum) size for the CHM repository is 2048 MB. We can increase the repository size by issuing the following command:

-bash-4.1$ oclumon manage -repos changerepossize 4000
The Cluster Health Monitor repository was successfully resized.The new retention is 266160 seconds.

This command internally runs a resize on the datafile, and we can see that it changed the datafile size from 2 GB to 4 GB:

SQL> select file_name,bytes/1024/1024,maxbytes/1024/1024,autoextensible from dba_data_files;

FILE_NAME					   BYTES/1024/1024 MAXBYTES/1024/1024 AUT
-------------------------------------------------- --------------- ------------------ ---
+VDISK/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687	      4000		    0 NO

If we try to reduce the size from 4 GB to 3 GB, it warns that all repository data will be deleted and, upon confirmation, drops it:

-bash-4.1$ oclumon manage -repos changerepossize 3000
Warning: Entire data in Cluster Health Monitor repository will be deleted.Do you want to continue(Yes/No)?
Yes
The Cluster Health Monitor repository was successfully resized.The new retention is 199620 seconds.

Trace files for the DB are stored under DIAG_HOME/_mgmtdb/-MGMTDB/trace, and the instance alert log can be found at the same location. Since the file names start with -MGMTDB*, we need to prefix them with ./ to access them, e.g.:

[oracle@oradbdev02]~/diag/rdbms/_mgmtdb/-MGMTDB/trace% vi -MGMTDB_mmon_7670.trc
VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 17 2012 10:23:31)
Unknown option argument: "-MGMTDB_mmon_7670.trc"
More info with: "vim -h"
[oracle@oradbdev02]~/diag/rdbms/_mgmtdb/-MGMTDB/trace% vi ./-MGMTDB_mmon_7670.trc

Sample output from a 3-node RAC setup:

[oracle@oradbdev02]~% oclumon dumpnodeview -allnodes

----------------------------------------
Node: oradbdev02 Clock: '13-07-23 07.19.00' SerialNo:1707 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 5.15 cpuq: 1 physmemfree: 469504 physmemtotal: 7928104 mcache: 5196464 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 51 ios: 9 swpin: 0 swpout: 0 pgin: 134 pgout: 140 netr: 223.768 netw: 176.523 procs: 461 rtprocs: 25 #fds: 24704 #sysfdlimit: 779448 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'oraagent.bin(7090) 2.59' topprivmem: 'java(7247) 149464' topshm: 'ora_mman_snowy1(7783) 380608' topfd: 'ocssd.bin(6249) 273' topthread: 'crsd.bin(6969) 42' 

----------------------------------------
Node: oradbdev03 Clock: '13-07-23 07.19.02' SerialNo:47 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 3.65 cpuq: 2 physmemfree: 1924468 physmemtotal: 7928104 mcache: 4529232 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 1 iow: 83 ios: 17 swpin: 0 swpout: 0 pgin: 45 pgout: 55 netr: 67.086 netw: 55.042 procs: 373 rtprocs: 22 #fds: 21280 #sysfdlimit: 779448 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'osysmond.bin(19281) 1.99' topprivmem: 'ocssd.bin(19323) 83528' topshm: 'ora_mman_snowy2(20306) 261508' topfd: 'ocssd.bin(19323) 249' topthread: 'crsd.bin(19617) 40' 

----------------------------------------
Node: oradbdev04 Clock: '13-07-23 07.18.58' SerialNo:1520 
----------------------------------------

SYSTEM:
#pcpus: 4 #vcpus: 4 cpuht: N chipname: Dual-Core cpu: 3.15 cpuq: 1 physmemfree: 1982828 physmemtotal: 7928104 mcache: 4390440 swapfree: 8191992 swaptotal: 8191992 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 25 ios: 4 swpin: 0 swpout: 0 pgin: 57 pgout: 27 netr: 81.148 netw: 41.761 procs: 355 rtprocs: 24 #fds: 20064 #sysfdlimit: 779450 #disks: 6 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'ocssd.bin(6745) 2.00' topprivmem: 'ocssd.bin(6745) 83408' topshm: 'ora_mman_snowy3(8168) 381768' topfd: 'ocssd.bin(6745) 247' topthread: 'crsd.bin(7202) 40'

You can learn more about oclumon usage by referring to the Oclumon Command Reference.

I hit an issue in my setup where the oclumon command was failing with an ORA-28000 error. I tried unlocking the account, but that alone did not help:

oclumon dumpnodeview

dumpnodeview: Node name not given. Querying for the local host
CRS-9118-Grid Infrastructure Management Repository connection error 
 ORA-28000: the account is locked

SQL> alter user chm account unlock;

User altered.

dumpnodeview: Node name not given. Querying for the local host
CRS-9118-Grid Infrastructure Management Repository connection error 
 ORA-01017: invalid username/password; logon denied

This issue occurred because post-configuration tasks had failed during GI installation. The solution is to run mgmtca from the Grid home, which fixed the issue by unlocking the users and setting their passwords. A wallet is configured so that oclumon can access the repository without a hard-coded password.
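
In my case, simply running the tool as the Grid software owner from the Grid home on the node hosting MGMTDB was enough (the home path below matches the srvctl config output above):

$ /home/oragrid/bin/mgmtca

The mgmtca log excerpt below shows it creating the wallet and unlocking the users.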

[main] [ 2013-07-23 05:32:41.619 UTC ] [Mgmtca.main:102]  Running mgmtca
[main] [ 2013-07-23 05:32:41.651 UTC ] [Mgmtca.execute:192]  Adding internal user1
[main] [ 2013-07-23 05:32:41.653 UTC ] [Mgmtca.execute:194]  Adding internal user2
[main] [ 2013-07-23 05:32:42.028 UTC ] [Mgmtca.isMgmtdbOnCurrentNode:306]  Management DB is running on blr-devdb-003local node is blr-devdb-003
[main] [ 2013-07-23 05:32:42.074 UTC ] [MgmtWallet.createWallet:54]  Wallet created
[main] [ 2013-07-23 05:32:42.084 UTC ] [Mgmtca.execute:213]  MGMTDB Wallet created
[main] [ 2013-07-23 05:32:42.085 UTC ] [Mgmtca.execute:214]  Adding user/passwd to MGMTDB Wallet
[main] [ 2013-07-23 05:32:42.210 UTC ] [MgmtWallet.terminate:122]  Wallet closed
[main] [ 2013-07-23 05:32:42.211 UTC ] [Mgmtca.execute:227]  Unlocking user and setting password in database
[main] [ 2013-07-23 05:32:42.211 UTC ] [Mgmtjdbc.connect:66]  Connection String=jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/home/oragrid/bin/oracle)(ARGV0=oracle-MGMTDB)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(ENVS='ORACLE_HOME=/home/oragrid,ORACLE_SID=-MGMTDB')))
[main] [ 2013-07-23 05:32:42.823 UTC ] [Mgmtjdbc.connect:72]  Connection Established

These are the two internal users referred to above:

select username,account_status from dba_users where username like 'CH%';

USERNAME		       ACCOUNT_STATUS
------------------------------ --------------------------------
CHM			       OPEN
CHA			       OPEN

12c: Exporting Database Views as Tables

Starting with Oracle Database 12c, you can export a view so that it can be imported as a table. There is no need to export each underlying table individually; Data Pump dumps a table with the same columns as the view and with row data fetched from the view.
It also exports objects dependent on the view, such as grants and constraints. To use this feature, we need the VIEWS_AS_TABLES parameter.

Let's see this feature in action with an example.

We have created a view on the EMP and DEPT tables to show employee details along with the manager name:

create view emp_view as select emp.EMPNO, emp.ENAME , emp.JOB,mgr.ename MGRNAME, emp.HIREDATE,emp.SAL ,emp.COMM,dept.DNAME DEPTNAME FROM
EMP emp,DEPT dept,EMP mgr
where emp.deptno=dept.deptno 
and mgr.empno(+)=emp.mgr order by 1;

Create a directory object to store the Data Pump dump file:

create directory dpdir as '/home/oracle/datapump';

Now we take a Data Pump export, specifying the view name in views_as_tables:

$ expdp system views_as_tables=scott.emp_view directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_exp.log

Export: Release 12.1.0.1.0 - Production on Mon Jul 22 12:05:26 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** views_as_tables=scott.emp_view directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_exp.log 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "SCOTT"."EMP_VIEW"                          8.781 KB      14 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /home/oracle/datapump/emp_view.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jul 22 12:05:52 2013 elapsed 0 00:00:20

We can see that this exported 14 rows. To see if it actually works, we will import it, but into a different schema (remap_schema does the trick here):

$ impdp system remap_schema=scott:amitbans directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_imp.log

Import: Release 12.1.0.1.0 - Production on Mon Jul 22 12:36:33 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** remap_schema=scott:amitbans directory=dpdir dumpfile=emp_view.dmp logfile=emp_view_imp.log 
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
. . imported "AMITBANS"."EMP_VIEW"                       8.781 KB      14 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Mon Jul 22 12:36:48 2013 elapsed 0 00:00:09

Let’s verify the data

SQL> show user
USER is "AMITBANS"
SQL> select * from emp_view;

     EMPNO ENAME      JOB	MGRNAME    HIREDATE	    SAL       COMM DEPTNAME
---------- ---------- --------- ---------- --------- ---------- ---------- --------------
      7369 SMITH      CLERK	FORD	   17-DEC-80	    800 	   RESEARCH
      7499 ALLEN      SALESMAN	BLAKE	   20-FEB-81	   1600        300 SALES
      7521 WARD       SALESMAN	BLAKE	   22-FEB-81	   1250        500 SALES
      7566 JONES      MANAGER	KING	   02-APR-81	   2975 	   RESEARCH
      7654 MARTIN     SALESMAN	BLAKE	   28-SEP-81	   1250       1400 SALES
      7698 BLAKE      MANAGER	KING	   01-MAY-81	   2850 	   SALES
      7782 CLARK      MANAGER	KING	   09-JUN-81	   2450 	   ACCOUNTING
      7788 SCOTT      ANALYST	JONES	   19-APR-87	   3000 	   RESEARCH
      7839 KING       PRESIDENT 	   17-NOV-81	   5000 	   ACCOUNTING
      7844 TURNER     SALESMAN	BLAKE	   08-SEP-81	   1500 	 0 SALES
      7876 ADAMS      CLERK	SCOTT	   23-MAY-87	   1100 	   RESEARCH
      7900 JAMES      CLERK	BLAKE	   03-DEC-81	    950 	   SALES
      7902 FORD       ANALYST	JONES	   03-DEC-81	   3000 	   RESEARCH
      7934 MILLER     CLERK	CLARK	   23-JAN-82	   1300 	   ACCOUNTING

14 rows selected.

We can see from the data dictionary that this has been imported as a table, not a view:

SQL> select * from tab;

TNAME			       TABTYPE	CLUSTERID
------------------------------ ------- ----------
EMP_VIEW		       TABLE

There are a few restrictions when using this feature:

- The view must exist and must be a relational view with only scalar, non-LOB columns.
- The VIEWS_AS_TABLES parameter cannot be used together with the TRANSPORTABLE=ALWAYS parameter.

Reference

http://docs.oracle.com/cd/E16655_01/server.121/e17639/dp_export.htm#BEHDIADG 

12c: Sqlplus Displays Last Login Time For Non – Sys Users

The 12c database introduces a pretty nifty security feature that shows the last login time for non-SYS users. For example, if I connect to the SCOTT user as below, it displays that I last logged in at Mon Jul 22 2013 09:06:07 +00:00. The time is displayed in the local time format (UTC in this case).

[oracle@oradbdev01]~% sqlplus scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:14:25 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Mon Jul 22 2013 09:06:07 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If you wish to disable this behaviour, you can use the -nologintime option:

 [oracle@oradbdev01]~% sqlplus -nologintime scott/oracle

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 09:16:37 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Using CONNECT inside SQL*Plus will not display the last login time, but it will still update the recorded last login time.
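
The recorded value can also be queried from the data dictionary; in 12c the DBA_USERS view includes a LAST_LOGIN column:

SQL> select username, last_login from dba_users where username = 'SCOTT';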