Saurabh Sood

Has worked with prestigious financial institutions and product-based MNCs such as Swiss Bank (UBS), Citibank, Qatar Central Bank (QCB), Societe Generale, Oracle Corporation and Dell. An Oracle DBA who is up-skilling to help large enterprises move their data to the cloud and gain more insight from that data to make it useful for them.

Azure BLOB Storage As Remote Backend for Terraform State File

Terraform supports team-based workflows with its “remote backend” feature. A remote backend allows Terraform to store its state file on shared storage
so that any team member can use Terraform to manage the same infrastructure. The state file keeps track of the current state of the infrastructure being
deployed and managed by Terraform. Read more about Terraform state here.
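If you want to see what Terraform is tracking at any point, the state can be inspected from the command line; a minimal sketch, assuming Terraform is installed and you run it from the working directory that holds your configuration:

$ terraform state list     (lists every resource recorded in the state file)
$ terraform show           (prints the full state, including resource attributes)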

The remote shared storage can be:
– Azure Blob
– Amazon S3
– Terraform Cloud

In this post I will be using VSCode to:
- Log in using an Azure account named “terraform” (this account has only been assigned the storage-contributor role)
- Use the Azure service-principal configuration in Terraform
- Configure Terraform to store its state file on Azure Blob storage while creating an Azure resource group

As a first step to demonstrate Azure service-principal usage, log in as the terraform user from the Azure portal and verify that this user does not have privileges to create a resource group.

Now, to log in as the terraform user in Azure, open VSCode, click View => Command Palette and type Azure: Sign Out.

This makes sure you are logged out of any previous Azure session. Now run Azure: Sign In from the Command Palette; it will open a browser and ask you to sign in. Use the Azure account (terraform) created for this purpose. You will see the username at the bottom of VSCode.

We will now configure the already created Azure service principal in Terraform (refer to my previous blog post). The service principal looks like below after it is created, and it is important to note down these details:

These Azure service-principal values map to the Terraform variables as below (click here to read more about it in the Terraform documentation):

appId = client_id in Terraform
password = client_secret in Terraform
tenant = tenant_id in Terraform
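As an aside, the same values can also be passed to the azurerm provider through environment variables instead of Terraform variables; a minimal sketch with placeholder values (the ARM_* names are the environment variables the provider reads, the values below are dummies):

$ export ARM_CLIENT_ID="xxxxxx-xxxx-xxxx-xxxx-xxxxxx"
$ export ARM_CLIENT_SECRET="xxxxxxxxxxxxxxxx"
$ export ARM_TENANT_ID="xxxxxxxxxxxxxxxx"
$ export ARM_SUBSCRIPTION_ID="xxxxxxxxxxxxxxxx"

In this post, however, I will keep the credentials in Terraform variables as shown next.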

We will configure these login details in Terraform using a variables.tf file. Variables can either be defined in a single step, or declared in an input variable file and assigned their values in a separate variable definition file (.tfvars). Terraform automatically loads variable definition files if:
– the file name is exactly terraform.tfvars, or
– the file name ends with .auto.tfvars

For more on Terraform variables refer to Terraform-Variables
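If a definition file follows neither naming rule, it has to be passed explicitly; a small sketch (dev.tfvars is a hypothetical file name used only for illustration):

$ terraform plan -var-file="dev.tfvars"     (explicitly load a non-default variable definition file)
$ terraform plan                            (terraform.tfvars and *.auto.tfvars are picked up automatically)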

Configuring Terraform files

Create a folder on your PC named “tfstate” to hold the variables.tf, main.tf and terraform.tfvars files.

variables.tf

variable "azure_app_id" {
  description = "Azure App Id for service principal"
  type        = string
}
variable "azure_password" {
  description = "Azure password for service principal"
  type        = string
}
variable "azure_tenant_id" {
  description = "Azure tenant for service principal"
  type        = string
}
variable "azure_subscription_id" {
  description = "Azure subscription-id for service principal"
  type        = string
}

terraform.tfvars

azure_app_id          = "xxxxxx-xxxx-xxxx-xxxx-xxxxxx"
azure_password        = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
azure_tenant_id       = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
azure_subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

main.tf

terraform {
  backend "azurerm" {
    resource_group_name  = "RGaskdba"
    storage_account_name = "soodstore"
    container_name       = "container4terraform"
    key                  = "DEV_TFSTATE"
  }
}

provider "azurerm" {
  version         = "~> 2.0"
  client_id       = var.azure_app_id
  client_secret   = var.azure_password
  tenant_id       = var.azure_tenant_id
  subscription_id = var.azure_subscription_id
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "terratestRG"
  location = "Australia Southeast"
}

A “backend” in Terraform determines how state is loaded and stored. Here we specify “azurerm” as the backend, which means the state goes to Azure, and we specify the resource group name, storage account name and container name where the state file will reside. “key” is the name of the state file within the blob container.

This code will create a resource group named terratestRG in the Australia Southeast region.

Once these files are created in the “tfstate” folder, go to VSCode and open the folder using File => Open Folder => tfstate, then open the Command Palette and run “Azure: Terraform init”.

A pop-up will appear asking to “Open Cloud Shell”; click Yes.

This initializes the azurerm backend, installs the required Azure plugins, and copies all the local Terraform files in the tfstate folder to the Azure file share.
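If you prefer a plain terminal over the VSCode command, the equivalent step is a normal terraform init run from the tfstate folder; a minimal sketch, assuming the service-principal credentials above are already in place:

$ cd tfstate
$ terraform init     (initializes the azurerm backend and downloads the provider plugins)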

Run Azure: Terraform plan from the Command Palette, and the following plan will be created.

 

It has created a plan for the resource group mentioned above, and at the same time we can see that the Terraform state file “DEV_TFSTATE” has been created in the specified container.

Run Azure: Terraform apply from the Command Palette and it will create the resource group.
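To confirm the result from the Azure CLI, you can check both the new resource group and the state blob; a hedged sketch, assuming the account you are logged in with has data-plane read access to the storage account:

$ az group show --name terratestRG --output table
$ az storage blob list --account-name soodstore --container-name container4terraform --auth-mode login --output table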

The Azure resource group has been created using VSCode and an Azure service principal, with the Terraform state file stored in Blob storage.

Azure Infrastructure Automation With Terraform: Configuration

In this article I will explain how to configure Terraform to automate Azure infrastructure deployments.

There are multiple ways to configure Terraform to work with Azure; I prefer the following two:

  • Configuring “Azure Terraform” Visual Studio Code(VSCode) extension.
  • Configuring Terraform using Azure Cloud Shell.

“Azure Terraform” VSCode Extension:

Prerequisites:

Azure Subscription: If you don’t already have an Azure Subscription, create one.

Terraform: Install and configure Terraform

Visual Studio Code: Install VSCode

Node.js: This is required to get the Azure login page from VSCode. To download, click here. To verify the installation, run node -v from a terminal window. If it asks you to execute node -c, do so; otherwise the Azure login page will not appear.

GraphViz: This is optional and is used to get a graphical representation of Terraform init, plan, etc. If you need it, download and install GraphViz.

Installing Azure Terraform VSCode extension

Launch Visual Studio Code and select Extensions

 

In the extension search box, type @installed to check which extensions are already installed in your VSCode.

Search for Azure Terraform in the extension search box.

Select Install. When you install this extension, the Azure Account extension is automatically installed in your VSCode as well. Use @installed in the search box to get the list of installed extensions.

 

Here you will see Azure Terraform and Azure Account are installed for you to use.

 

Configuring Terraform using Azure Cloud Shell

The only prerequisite is an Azure subscription. If you are opening Azure Cloud Shell for the first time, it will ask for a mounted file share; if you don't already have one, it will ask you to create it, and it will be mounted as clouddrive under your $HOME directory. Click the highlighted icon to launch Azure Cloud Shell.

 

Install Terraform: Cloud Shell already has the latest version of Terraform installed, so no additional installation steps are required.
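You can quickly confirm which version Cloud Shell provides with a one-line check:

$ terraform -version     (prints the installed Terraform version)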

NOTE: Automation tools like Terraform should always run with restricted permissions and use an Azure service principal to authenticate.

Now we will create a service principal for Terraform, which it will use to log in to the Azure subscription.

From the Azure Cloud Shell, run the following commands:

$ az account show    (This will list your subscription-id)

$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<your-subscription-id>" --name="SPterraform"

The above command will create a service principal named SPterraform.

The randomly generated password can't be retrieved, so make sure to save it. Now, log in using this service principal:

$ az login --service-principal -u "http://SPterraform" -p "password-shown-above" --tenant "tenant-shown-above"

You are now logged in as the service principal, and the Azure CLI is ready for Terraform.

This completes the basic setup to run Terraform with Azure. In my upcoming posts I will demonstrate how to use this setup to write Terraform code using the Azure CLI and the “Azure Terraform” VSCode extension.

How To Configure Exadata Database Machine in Enterprise Manager Cloud Control 13c (OEM13c)

I have followed the steps in the Oracle documentation link https://docs.oracle.com/cd/E63000_01/EMXIG/ch2_deployment.htm#EMXIG215 to configure an Exadata Database Machine in OEM13c. If you want to configure your Exadata in OEM13c, you should follow the above-mentioned link.
In this post I will share the mandatory configuration steps and some of the issues I faced while configuring the Exadata in OEM13c.
NOTE: OEM13c agents only need to be deployed on the compute nodes of your Exadata machine.

Step 1: Deploy Exadata Plug-In in OEM13c.

Step 2: For an EM agent to communicate with the ILOM SP, a user must be created on the ILOM SP of all the compute nodes.
Create a database server ILOM SP (Service Processor) user.
Log in to the compute node ILOM as the “root” user:
# cd /SP/users
# create oemuser
Creating user…
Enter new password: ********
Enter new password again: ********

Created /SP/users/oemuser

Change to the new user’s directory and set the role:

# cd oemuser
/SP/users/oemuser

set role='cro'
Set 'role' to 'cro'

Now test the ILOM user ID created:

For Exadata X5-2:
# ipmitool -I lanplus -H <ComputeNodeILOMHostname> -U oemuser -P xxxxxx -L USER sel list last 10
It should display some results.

Now run the above steps on all Compute Node ILOMs.

STEP 3: Push the OEM agent to the compute nodes.
From the OEM13c console, select Setup from the top right corner, then Add Target, and then Add Target Manually. Enter the compute node's hostname, select your OS version, fill in the rest of the details on the screen and click Deploy. This deploys the agent on the compute nodes you have specified.

 

Step 4: Run the discovery precheck script.
To ensure that discovery of the Exadata machine completes without any issues, you need to run exadataDiscoveryPreCheck.pl. This script is available under the Exadata plug-in location on the OEM13c OMS server, i.e.:
<OMS_agent installation directory>/plugins/oracle.sysman.xa.discovery.plugin_12.1.0.3.0/discover/dbmPreReqCheck/exadataDiscoveryPreCheck.pl. Verify the path as per your configuration and run the script. You can also download the script from MOS Note 1473912.1.
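A hedged sketch of invoking it (the directory is the plug-in path quoted above, adjusted for your installation; the script asks for host lists and credentials interactively):

$ cd <OMS_agent installation directory>/plugins/oracle.sysman.xa.discovery.plugin_12.1.0.3.0/discover/dbmPreReqCheck
$ perl exadataDiscoveryPreCheck.pl     (answer the interactive prompts for hosts and credentials)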

NOTE: For the InfiniBand user you have to use “nm2user”; its default password is changeme.

This script showed the following errors for me:
Verifying setup files consistency... 
------------------------------------ 
Verifying cell nodes... 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name> is missing in one of the setup files. 
Cell node <CellNode Name>.domain is missing in one of the setup files. 
Verifying infiniband nodes... 
Infiniband node <IBNode Name>.domain is missing in one of the setup files. 
Infiniband node <IBNode Name> is missing in one of the setup files. 
Infiniband node <IBNode Name>.domain is missing in one of the setup files. 
Infiniband node <IBNode Name> is missing in one of the setup files. 
Infiniband node null is missing in one of the setup files. 
Verifying KVM nodes... 
KVM node null is missing in one of the setup files. 
Verifying PDU nodes... 
PDU node <PDUNode Name> is missing in one of the setup files. 
PDU node <PDUNode Name> is missing in one of the setup files. 
PDU node <PDUNode Name>.domain is missing in one of the setup files. 
PDU node <PDUNode Name>.domain is missing in one of the setup files. 
Setup files are not consistent ===> Not ok 
* Please make sure that node information in both parameter and schematic files 
is consistent. 
======================================================= 
* Please make sure ciphers are correctly set in all cell and compute nodes. 
Verifying SSH cipher definition for <CellNode Name> cell node... 
None of the expected ciphers were found in sshd_config file ===> Not ok 
* Please make sure ciphers are correctly set in sshd_config file. 
== =========================================================

So there were two issues:
1. The parameter file and the schematic file were not in sync with each other.
2. A valid cipher was missing in the cell nodes' sshd_config file.
For the parameter file issue, we need to check two files under /opt/oracle.SupportTools/onecommand, em.params and databasemachine.xml, and make sure that the entries are the same in both files. In my case all the names in em.params used the FQDN while the names in databasemachine.xml did not, so I modified em.params to remove the FQDN from all names.
For the cipher issue, since the compute nodes did not error out for valid ciphers, I copied one cipher entry from a compute node to all the cell nodes and restarted the sshd service (a sketch of this fix follows below).
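A minimal sketch of that cipher fix on a cell node; the cipher list here is only an example, so copy the Ciphers line that already works on your compute nodes rather than taking this one verbatim:

# vi /etc/ssh/sshd_config        (append the Ciphers line copied from a compute node, for example:)
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
# service sshd restart           (restart sshd so the new cipher list takes effect)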

After making these two changes I ran the exadataDiscoveryPreCheck.pl script again and it came out clean.

STEP 5: Discovering an Exadata Database Machine

1. From the Enterprise Manager home page, select the Setup menu (upper right corner), Add Target, and then Add Targets Manually.

2. On the Add Targets Manually page, click Add Targets Using Guided Process. From Add Using Guided Process window, select Oracle Exadata Database Machine from the list and click Add.

3. On the Oracle Exadata Database Machine Discovery page, select one of the following tasks:
13c target type
12c target type
I opted for 13c target type.

4. On the Discovery Inputs page, enter the following information:
For the Discovery Agents section:
Agent URL: the agent deployed on the compute node. Click the search icon to select from the available URLs.
For the Schematic Files section:
Once you have specified the Agent URL, a new row (hostname and schematic file information) is automatically added. The default schematic file, databasemachine.xml, describes the hardware components of the Exadata Database Machine.
Click Set Credential to set the credentials for the host.
Check/modify the schematic file location.
Select the schematic file name from the drop-down menu.

5. On the InfiniBand Discovery page, enter the following information:
IB Switch Host Name: The InfiniBand switch host name. The IB Switch host name is usually pre-populated.
InfiniBand Switch ILOM host credential: The user name (usually ilom-admin or ilom-operator) and password for the InfiniBand switch ILOM host.

The rest of the steps are self-explanatory and can be completed easily.

On the Credentials page, after filling in the root password, you will get two options under SNMP credentials:
— Credential Type SNMPV1
— Credential Type SNMPV3
I opted for SNMPV3, which requires an ExaCLI username/password. So you have to create an ExaCLI user as described at
http://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471.pdf on page 384.
Create the ExaCLI user and provide the requested information under SNMPV3 (a hedged sketch of creating such a user follows below).
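For reference, ExaCLI users are created on each storage cell through CellCLI; a hedged sketch, where the user name oemexa and the broad role grant are illustrative only, so follow the documentation above for the exact privileges your monitoring user needs:

# cellcli
CellCLI> create role oemrole
CellCLI> grant privilege all actions on all objects all attributes with all options to role oemrole
CellCLI> create user oemexa password=*     (CellCLI prompts for the password)
CellCLI> grant role oemrole to user oemexa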

Click Submit and it will take some time to discover the Exadata DB Machine.

After this, you can see “Exadata” under the Targets tab on the OEM13c home page.

 

Is Your Data Center Ready For Exadata Machines!!

Before opting to procure Exadata machines, the first thing you need to check is the readiness of your data center to hold these machines. Exadata's dimensions are similar to other normal racks, i.e. a 42U rack. The complete details of the X5-2 physical site requirements can be found HERE.

This is a very important step because, for some data centers, it may take months to complete the site requirements, and you do not want to park your Exadata in a store room due to issues like space unavailability in the data center. There are three important things that you must consider before buying it:

  1. There should be enough space in the data center to hold this machine.
  2. The power requirement should be considered carefully, as the machine will be customized to the client's specific requirement, i.e. single-phase or three-phase. Before you place the order, Oracle will ask about the power setup in your data center, and you have to tell them whether you want a machine with single-phase or three-phase power. I think this cannot be changed after the machine's delivery, but I am not sure.
  3. The most important one is the network requirements. Setting up Exadata requires some heavy up-front planning for your network: the IPs, network switches, 10G uplinks, patch panels, and fibre-ready modules in the core switches. Only management access is over 1G copper; all client access should go through 10G fibre connectivity.

Once all these things are ready in your data center, you are ready to deploy Exadata.

 

Exadata X6-2 Launched

On 5th April 2016, Oracle announced the arrival of the Exadata X6-2. An overview of all its capabilities can be found HERE. Check the datasheet for the list of hardware components and the capacity of each X6-2 rack.

Sometimes Oracle changes the hardware specifications of its racks; e.g. the X5-2 was initially released with 4TB HDDs, but after a few months the datasheet was modified and Oracle started delivering the X5-2 with 8TB HDDs in the storage cells. So it is advisable to check the datasheet before ordering, to be sure of what you will eventually get.

 

Planning Database Hardware Upgrade !!! Consider Oracle Exadata

Upgrading database hardware in an organization is always a cumbersome process. The most time-consuming step is planning for the upgrade, which mainly means choosing the right hardware for your Oracle databases. After you decide on the hardware type for your databases, the rest will be taken care of by the technical teams involved. Here I will discuss how I reached the conclusion of implementing Exadata in my organization.

I deal with multiple hardware vendors (HP, IBM, Dell, etc.) and different types of bare-metal servers (rack-mounted, blade machines, etc.), and in addition some databases run virtualized. Most of the servers are old and need to be replaced: we started facing issues like server end-of-life, frequent hardware failures and performance degradation. To start the hardware upgrade process, I evaluated multiple solutions from different vendors, such as HP Converged Systems, Oracle SuperCluster, an all-blade environment (quarter/half/full height), a fully virtualized environment (Hyper-V, VMware, Oracle VM), Oracle Exadata, and EMC vBlock.

I had to decide on the best solution based on TCO (Total Cost of Ownership) and annual maintenance cost, and a significant amount of time was spent analyzing and comparing all these solutions. After a four-month study of the financial impact and overall hardware performance, Exadata came out as the best of all, for the following main reasons:

  • It reduces the space required for database servers in the data center. I will be removing at least four racks from my data center with the Exadata implementation.
  • It eliminates the need for external SAN storage for databases, which saves a huge amount of money.
  • SAN switches (Cisco MDS, etc.) are not required for server-to-SAN connectivity. This is usually a hidden component in data centers that consumes a lot of money and is often a bottleneck for database I/O if not configured properly.
  • Its Capacity on Demand (COD) feature (introduced with the X5-2) saves a lot of money when the resource requirement is low.
  • The licensing cost of Oracle databases will be reduced, as new DBs can be added without any additional cost.
  • Its storage software processes the business logic and returns the processed results to the compute nodes. This completely changes traditional query processing by moving it to the storage servers.

The reasons mentioned above are only a few; there are many more benefits and new features, which I will discuss in my future Exadata posts.

If you are planning to revamp your data center for hosting databases, it is well worth considering Oracle Exadata.