12c: Specifying a lock timeout for finish_redef_table

Oracle 12c has introduced an enhancement to DBMS_REDEFINITION that lets you specify a lock timeout (in seconds), which allows FINISH_REDEF_TABLE to wait for the exclusive lock needed to swap the source and interim tables. If the timeout expires, the operation exits. This helps avoid having to cancel the operation manually or wait indefinitely. Note that this does not put our session in a queue, as the example below shows.

1. Let’s test this on the EMP table. I have created the table in my schema and copied the records.

CREATE TABLE "AMITBANS"."EMP"
   (	"EMPNO" NUMBER(4,0),
	"ENAME" VARCHAR2(10),
	"JOB" VARCHAR2(9),
	"MGR" NUMBER(4,0),
	"HIREDATE" DATE,
	"SAL" NUMBER(7,2),
	"COMM" NUMBER(7,2),
	"DEPTNO" NUMBER(2,0),
	 CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
  USING INDEX) tablespace users;

2. Let’s create an interim table that is hash partitioned on EMPNO. Note that I have excluded the primary key constraint/index, as we will copy it using dbms_redefinition.

 CREATE TABLE "AMITBANS"."INT_EMP"
   (	"EMPNO" NUMBER(4,0),
	"ENAME" VARCHAR2(10),
	"JOB" VARCHAR2(9),
	"MGR" NUMBER(4,0),
	"HIREDATE" DATE,
	"SAL" NUMBER(7,2),
	"COMM" NUMBER(7,2),
	"DEPTNO" NUMBER(2,0)) tablespace users
  partition by hash(empno) 
  partitions 4;

3. Let’s check whether dbms_redefinition can work on this table.

set serverout on
exec  DBMS_REDEFINITION.CAN_REDEF_TABLE('AMITBANS','EMP',DBMS_REDEFINITION.CONS_USE_PK);

CAN_REDEF_TABLE raises an error if the table cannot be redefined online; since there are no errors, we are safe to proceed.

4. Next, start the redefinition.

BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(UNAME =>'AMITBANS', ORIG_TABLE=>'EMP',INT_TABLE=>'INT_EMP',OPTIONS_FLAG=>dbms_redefinition.cons_use_pk);
END;
/

5. Copy the dependent objects. (This automatically creates any triggers, indexes, materialized view logs, grants and constraints on AMITBANS.INT_EMP.)

DECLARE
l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(UNAME =>'AMITBANS', ORIG_TABLE=>'EMP',INT_TABLE=>'INT_EMP',COPY_INDEXES=> DBMS_REDEFINITION.CONS_ORIG_PARAMS,
   COPY_TRIGGERS=>TRUE,COPY_CONSTRAINTS=> TRUE, COPY_PRIVILEGES=>TRUE, IGNORE_ERRORS =>TRUE,NUM_ERRORS=> l_num_errors,COPY_STATISTICS=>TRUE);
END;
/

We need to check whether there were any errors while copying the dependent objects. Use the query below:

select object_name, base_table_name, ddl_txt from DBA_REDEFINITION_ERRORS;

6. We can synchronize any new DML on the source table by issuing the following command.

 BEGIN 
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(UNAME =>'AMITBANS', ORIG_TABLE=>'EMP',INT_TABLE=>'INT_EMP');
END;
/

7. Let’s finish the redefinition, but before that we will do two inserts into the EMP table from different sessions without committing the transactions.
We will do it in the following order to show that finish_redef_table needs the table to have no active DML.

Session 1: Insert record 1

insert into emp values(8000,'HENRY','ANALYST',7698,sysdate,1500,null,10);

Session 2: Issue finish_redef_table to swap tables

BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(UNAME =>'AMITBANS', ORIG_TABLE=>'EMP',INT_TABLE=>'INT_EMP',dml_lock_timeout=>300);
END;
/

Session 3: Open one more session and do insert

insert into emp values(8001,'MICHAEL','ANALYST',7698,sysdate,1500,null,10);

If you now commit the transaction in session 1, the redefinition (session 2) still waits (to acquire the exclusive lock) for session 3 to commit. This shows that it is not in a queue and needs all active transactions on the table to finish. When we commit session 3, finish_redef_table succeeds.
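
While finish_redef_table is waiting, you can check from another session which transactions it is blocked behind. A minimal sketch against V$SESSION (the exact wait event shown can vary by release, so treat this as illustrative):

-- run as a DBA while session 2 is waiting on the swap
select sid, serial#, event, blocking_session, seconds_in_wait
from   v$session
where  blocking_session is not null
   or  event like 'enq: T%';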

Openfiler on Virtualbox and 12c Oracle Flex ASM

Last week Oracle released the 12c database, and the Oracle blogosphere is bustling with people posting details about the new version and their setups.
I too decided to take the plunge and set up my own RAC cluster. I had 4 machines with me but no shared storage 🙁
A long time back I had done one installation using Openfiler (following Jeff Hunter’s article on OTN), but in that case we installed the Openfiler software on the base machine. This time I decided to try installing it on VirtualBox.
I checked the Openfiler website and found that they provide VM templates, which meant that installation on a VM was supported. (Anyway, this is my test setup 🙂 )

This was a 64-bit machine, so I downloaded the 64-bit VirtualBox RPM.

When you try to install it, it will fail with the following error:

rpm -i VirtualBox-4.2-4.2.14_86644_el6-1.x86_64.rpm 
warning: VirtualBox-4.2-4.2.14_86644_el6-1.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 98ab5139: NOKEY
error: Failed dependencies:
	libSDL-1.2.so.0()(64bit) is needed by VirtualBox-4.2-4.2.14_86644_el6-1.x86_64

Solution: install the SDL package. You can either install SDL through yum first, or install the VirtualBox RPM itself using yum so that it finds the dependent SDL package and installs it for you.

#yum install SDL
# rpm -i VirtualBox-4.2-4.2.14_86644_el6-1.x86_64.rpm 
warning: VirtualBox-4.2-4.2.14_86644_el6-1.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 98ab5139: NOKEY

Creating group 'vboxusers'. VM users must be member of that group!

No precompiled module for this kernel found -- trying to build one. Messages
emitted during module compilation will be logged to /var/log/vbox-install.log.

WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
Stopping VirtualBox kernel modules [  OK  ]
Recompiling VirtualBox kernel modules [  OK  ]
Starting VirtualBox kernel modules [  OK  ]
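
For the second option, a hedged sketch: let yum resolve the SDL dependency while installing the local RPM (add --nogpgcheck if yum complains about the missing signing key, as the rpm -i output above suggests the key is not imported).

yum localinstall VirtualBox-4.2-4.2.14_86644_el6-1.x86_64.rpm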

Add the oracle user to the vboxusers group so that it can manage VMs (use -a to append, so oracle’s existing supplementary groups are preserved):

usermod -a -G vboxusers oracle

Once done, you can download the Openfiler software from http://www.openfiler.com/community/download/. You can get a direct link for the 64-bit binary at Sourceforge.net.

Let’s build the virtual machine for our setup.

1. Start the VirtualBox GUI by issuing virtualbox on the command line.
2. Click on New VM and choose the OS (Linux) and version (Red Hat 64-bit; change it based on your OS).
3. Allocate memory to the machine. I opted for 3G.
4. Create a virtual drive of 30G and use .VDI as the format. Even though actual usage is less than 8G, the Openfiler install fails if you create an 8G disk.
5. Once done, click Finish and then click on Settings.
6. Choose the network as eth0 and Bridged Adapter. We are using a single adapter here.
7. Modify the boot order and remove floppy. Keep hard disk as the first option.
8. In the Storage settings, you can add an additional hard disk (say 100G) which will be used to set up the ASM devices.

You can use the screenshots from the oracle-base 12c install article. Note that we are using a single bridged adapter.

9. When you start the VM, it will ask you to choose a start-up disk. Choose your openfiler.iso image.
10. Press Enter on the boot screen and then click Next.

openfiler1
11. Choose the installation language.
12. Next, choose the hard disk that will be used for installing the software. Choose the 30G disk we allocated earlier and uncheck the second disk.

openfiler2
13. The next screen is network configuration; this is the most important screen. I used a static IP configuration: click Edit for eth0 and enter all the required information, i.e. IP, subnet and gateway. Also add the hostname and DNS information here.

openfiler3

14. Next, select the timezone and set the root password. Once done, you will get a success message and the machine will reboot. If everything is successful, you can reach your setup at

https://hostname:446/ (note that it is https, not http)

If this doesn’t work, look at your IP settings and ensure that your IP is pingable from outside. Further troubleshooting for the GUI can be done by restarting the openfiler and httpd services; if either gives an error, you can troubleshoot from there:

service openfiler restart
service httpd restart

You can configure the Openfiler ASM volumes by following Jeff Hunter’s article.
In case you are on RHEL6, the udev rules mentioned in the above link will not work. Frits Hoogland’s article will be of help here (http://fritshoogland.wordpress.com/2012/07/23/using-udev-on-rhel-6-ol-6-to-change-disk-permissions-for-asm/).
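
For reference, the RHEL6-style rules end up looking roughly like this. This is an illustrative sketch only; the exact scsi_id invocation and the RESULT string come from your own disks, so follow Frits’ article for the details.

# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative sketch; replace <wwid-of-your-disk> with the value scsi_id reports for your device)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="<wwid-of-your-disk>", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"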

You should now be ready with the ASM storage and can proceed with the RAC install. There is not much difference in the 12c install, except that we have a new feature called Oracle Flex ASM. I am documenting the relevant screenshots here.

1. Choose Standard Cluster here. If you choose Flex Cluster, it forces you to use GNS, as that option cannot be unchecked.

flexcluster1

2. Choose Advanced Installation.

3. When choosing network interfaces, select "ASM & Private" for the private interface.

flexcluster2

4. On screen 10, choose Oracle Flex ASM

flexcluster3

 

I did two Flex Cluster setups, one with a 3-node RAC and one with a 2-node RAC, and it seems to work in both cases. Let’s see Flex ASM in action.

You can verify whether your ASM is enabled to use Flex mode using the command below:

[oracle@oradbdev02]~% asmcmd showclustermode
ASM cluster : Flex mode enabled

The crsctl command can be used to set Flex mode later. The command below shows the current ASM configuration:

[oracle@oradbdev02]~% srvctl config asm
ASM home: /home/oragrid
Password file: +VDISK/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@oradbdev02]~% srvctl status asm -detail
ASM is running on oradbdev02,oradbdev03,oradbdev04
ASM is enabled.

Let’s reduce ASM to run on only 2 nodes. This will stop ASM on one node:

[oracle@oradbdev02]~% srvctl modify asm -count 2

[oracle@oradbdev02]~% srvctl status asm -detail 
ASM is running on oradbdev02,oradbdev03
ASM is enabled.

There is no ASM on node oradbdev04, but the database instance is still running:

[oracle@oradbdev04]~% ps -ef|grep pmon
oracle    3949     1  0 10:27 ?        00:00:00 ora_pmon_snowy3
oracle   18728     1  0 07:59 ?        00:00:01 apx_pmon_+APX3
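
To confirm that the database instance on oradbdev04 is now being served by a remote ASM instance, a quick sketch is to query V$ASM_CLIENT from one of the surviving ASM instances; the snowy3 client should show up there even though it runs on a node with no local ASM.

-- run on one of the remaining ASM instances
select instance_name, db_name, status from v$asm_client;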

If you now try to start ASM on the third node, it will give an error:

[oracle@oradbdev02]~% srvctl config asm
ASM home: /home/oragrid
Password file: +VDISK/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@oradbdev02]~% srvctl start asm -n oradbdev04
PRCR-1013 : Failed to start resource ora.asm
PRCR-1064 : Failed to start resource ora.asm on node oradbdev04
CRS-2552: There are no available instances of resource 'ora.asm' to start.

Let’s set the count back to 3, and ASM will start now:

[oracle@oradbdev02]~% srvctl modify asm -count 3
[oracle@oradbdev02]~% srvctl start asm -n oradbdev04
[oracle@oradbdev02]~% srvctl status asm -detail     
ASM is running on oradbdev02,oradbdev03,oradbdev04
ASM is enabled.

Now I need to explore some of the other new features.

portmap: unrecognized service on RHEL6

A quick note for people using NFS as shared storage for a RAC database. Up to RHEL5 we had to ensure that the nfs, nfslock and portmap services were running.
These services are required; otherwise you will get the following errors while mounting the database:

ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/oradata/orcl/control01.ctl'
ORA-27086: unable to lock file - already in use

These are usually enabled at boot using the chkconfig command. While working on a similar issue today, I found out that the portmap service is not present in RHEL 6:

# service portmap status
portmap: unrecognized service

The portmap service was used to map RPC program numbers to IP address/port number combinations in earlier versions of Red Hat Enterprise Linux.
As per the RHEL6 documentation, the portmap service has been replaced by rpcbind in Red Hat Enterprise Linux 6 to enable IPv6 support.
So the following command will work:

# service rpcbind status
rpcbind (pid  1587) is running...
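
To make sure the required services also come up at boot on RHEL6, something along these lines should work (service names as on a stock RHEL6 install; verify yours with chkconfig --list):

chkconfig rpcbind on
chkconfig nfslock on
chkconfig nfs on
service rpcbind start
service nfslock start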

You can read about NFS and the associated processes in the RHEL6 documentation.

Recovering from Lost AWS EC2 Key

This article is for AWS users using EC2 instances.
Summary: please back up your AWS EC2 key file and you will never have to go through the pain described below 🙂

AWS allows you to create a key pair which can be downloaded to your machine and is used for logging in to your EC2 instance.
I had been using it for accessing the EC2 instance hosting this blog. But last week my corporate laptop was upgraded, and I took a backup of all files except this key file.

I didn’t realize this until Saturday, when I wanted to log in to my EC2 instance. When I couldn’t find the file, I thought of downloading it again from the AWS console.
But it was nowhere to be found. I searched and found out that this file cannot be downloaded again.

The only solution to this problem is to create a new EC2 instance with a new key pair and then move the site. I have documented the high-level tasks that can be used to restore it. These steps were performed on an AWS micro instance, and it is assumed that all your files are on an EBS volume. My earlier article on AWS setup can be used for configuring the EC2 instance.

1. Create a new EC2 instance with a new key pair, make multiple backup copies of this key file and store them in multiple locations. Next, install apache, php, mysql or any other component required for your site. Create the EC2 instance in the same availability zone as your old instance (e.g. us-east-1a), since EBS volumes can only be attached to instances in the same availability zone.
2. Assign a new Elastic IP address to this EC2 instance.
3. If your site is still working, you can log in to the WordPress panel and take a MySQL backup which can be restored on the new EC2 instance. I use the WP-DBManager plugin for MySQL backups.
4. Next, take a snapshot of your existing EC2 volume (hosting the blog document directory). This is done via EC2 dashboard -> Elastic Block Store -> Snapshots. On completion of the snapshot, create a new volume out of it, in the same availability zone as the new EC2 instance, since we will be attaching it there.
5. We need to attach this volume to the new EC2 instance. Go to EC2 dashboard -> Elastic Block Store -> Volumes, choose the new volume, select Actions -> Attach Volume and choose the new EC2 instance name. Give the device name as /dev/xvdh.
6. Within seconds, you should see this volume inside your EC2 instance, e.g.

cat /proc/partitions 
major minor  #blocks  name

 202        1    8388608 xvda1
 202      112    8388608 xvdh

7. Create a new directory and mount this volume
#mkdir /newdir
#mount /dev/xvdh /newdir

8. You can now copy your blog files from /newdir/var/www/html (the old document root) to the new document root. Any config files can also be restored from the old machine.
9. Import the MySQL database from the mysqldump backup taken earlier (see the sketch below for both steps).
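
A minimal sketch of steps 8 and 9; the weblog directory and the dump file name mysql_backup.sql are placeholders from this setup, so adjust them to yours.

# copy the blog files from the mounted old volume to the new document root
cp -a /newdir/var/www/html/weblog /var/www/html/
# load the database dump into the blog database
mysql -u root -p blog < mysql_backup.sql
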
10. Once this is done, you can test whether your site is working fine by temporarily pointing WordPress at the new instance’s address, e.g.

UPDATE wp_options SET option_value = replace(option_value, 'http://oldsite', 'http://ec2-instance-name') WHERE option_name = 'home' OR option_name = 'siteurl';
UPDATE wp_posts SET guid = replace(guid, 'http://oldsite','http://ec2-instance-name');
UPDATE wp_posts SET post_content = replace(post_content, 'http://oldsite', 'http://ec2-instance-name');

This way you can verify that the site is working fine. Once done, restore the old name by modifying the above SQL or by re-importing the MySQL dump.

11. The final step is to switch the site to the new EC2 instance. I am using CloudFlare for DNS management, which allows near-instant propagation of DNS changes, so I simply changed my DNS record to point to the new Elastic IP assigned to this EC2 instance.
The other option is to disassociate your old Elastic IP from the old EC2 instance and attach it to the new one.

cursor: pin S wait on X

A few weeks back a thread came up on the oracle-l freelist regarding the “cursor: pin S wait on X” event. Tanel Poder replied to this thread with an excellent explanation. You can read the full thread here.

 

This wait happens when your session wants to examine / execute an existing
child cursor – but it’s being pinned in X mode by someone else. The usual
reasons for this are:
1) The cursor got flushed out for some reason (or it’s just the first time
anyone is running it) and the other session is currently hard
parsing/compiling/loading the contents of this cursor back into library
cache. The longer the parsing/loading takes, the longer you wait. If the
parsing session hits some (performance) issues, this pin may be held for a
long time.

or

2) The child cursor is already loaded into cache, but someone else is
currently flushing it out (shared pool shortage, fragmentation & loading of
a big object, SGA target / ASMM reducing the SGA size).

So, as explained above, this can be seen as a side effect of setting SGA_TARGET in the environment. I have also observed this behavior on a RAC system where a large shared pool (>10G) was under pressure and the LCK0 process was trying to release memory, which caused a lot of processes to wait on this event; at that point the database was completely hung. “cursor: pin S wait on X” might be reported for a lot of sessions, but we should check whether shared pool free memory is low and try to fix that first.
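
When you hit this, a quick first check is to see who is waiting and which session holds the mutex in X mode. A sketch against V$SESSION (BLOCKING_SESSION is generally populated for mutex waits on recent releases; on many platforms the holder’s SID is also encoded in the top bytes of P2RAW):

select sid, sql_id, event, blocking_session, p1 mutex_identifier, seconds_in_wait
from   v$session
where  event = 'cursor: pin S wait on X';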

Moving WordPress Blog to Amazon Web Services


Why I moved?

Last month I moved my blog hosting from Dreamhost to Amazon Web Services (AWS). I decided to change hosting providers as my blog suffered 17 outages (as per the Pingdom report for December 2012):

 

Uptime: 97.33%
Outages: 17
Average response time: 2359 ms

Dreamhost mentioned that my WordPress processes were running out of memory (30M) and were being killed by their memory manager. They recommended either uninstalling some of the plugins or moving to a Virtual Private Server at an additional cost of $15 per month (for 300M of memory).
At this point I decided to explore other hosting providers. I had read about Amazon Web Services, and searching on the internet revealed articles detailing the setup (mentioned in the reference section below). I decided on AWS as it provides a free account for 1 year, which was fine for me to test with.

Setting up AWS EC2 Instance

To start with, we need to create an AWS account at http://aws.amazon.com. Amazon offers one full free year of AWS use and includes a micro instance (with 613M of memory) as part of the package. You can refer to the AWS free tier usage page for details about the services included in this package.

After you create an account and log in, go to the EC2 page and click Launch Instance. We need to create an EC2 instance (virtual machine) to start our work. Use the Classic Wizard.

aws_0

I selected Amazon Linux AMI 2012.09 (64-bit) option

aws_1

Select the instance type as T1 Micro.

aws_2

Micro Instance allows only EBS volume

aws_3

Provide a tag for the server so that you can identify your servers distinctly.
aws_4

We need to create a key pair for connecting to the EC2 instance. I have already created a key pair and am going to use it; you can also create a new one. Once you are done creating the key pair, you have to download the key and save it on your desktop. Important note: preserve this key, because if you lose it you cannot download it again and you will lose access to the EC2 instance.
aws_5

Choose Security group (can use default)

aws_6

The final confirmation screen lists the options selected. Launch the instance.

aws_7

 

Once we launch the instance, this will start a virtual machine with a public DNS name like ec2-54-243-14-65.compute-1.amazonaws.com.

aws_8

This is fine if we are just accessing the host, but we can’t use it for assigning a web address. To fix this, we need to allocate an IP address: go to the Elastic IPs page and click on “Allocate New Address”. Next, select the IP and associate it with the virtual machine.

aws_9

We can see that we have an IP allocated to the virtual machine.

aws_10

To access the machine, we need to use the private key downloaded earlier:

ssh ec2-user@ec2-54-243-14-65.compute-1.amazonaws.com -i ~/.ssh/amitbansal.pem
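
If ssh refuses the key with an “UNPROTECTED PRIVATE KEY FILE” warning, tighten the permissions on the key file first (path as used above):

chmod 400 ~/.ssh/amitbansal.pem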

Configuring EC2 Instance

To host the website, we need three components: the Apache web server, PHP and the MySQL database.

We will use yum to install all the components, starting with the Apache web server:

yum install httpd

Start the Apache Web Server:

service httpd start

To install PHP, type:

yum install php php-mysql php-devel
yum install php-mbstring
yum install php-xml

I faced an error after setting up the WordPress captcha; the si-captcha plugin reports “GD image support not detected in PHP!”

To avoid it, install php-gd:

yum install php-gd

Set the Apache directory permissions as follows:

chown -R apache /var/www/html
chmod -R 755 /var/www/html

Restart the Apache Web Server:

service httpd restart

Next we install MySQL

yum install mysql-server

Start MySQL:

service mysqld start

Create your “blog” mysql database:

mysqladmin -uroot create blog

We need to secure the MySQL database. This is required to disallow remote root connections and to remove the anonymous user:

mysql_secure_installation

Use the following options:

Enter current password for root: Press return for none
Change Root Password: Y
New Password: Enter your new password
Remove anonymous user: Y
Disallow root login remotely: Y
Remove test database and access to it: Y
Reload privilege tables now: Y

To install WordPress, type:

cd /var/www/html
wget http://wordpress.org/latest.tar.gz
tar -xzvf latest.tar.gz
mv wordpress weblog

Create the WordPress wp-config.php file:

cd weblog
mv wp-config-sample.php wp-config.php
vi wp-config.php

Modify the database connection parameters as follows:

define('DB_NAME', 'blog');
define('DB_USER', 'root');
define('DB_PASSWORD', 'YOUR_PASSWORD');
define('DB_HOST', 'localhost');

If we try accessing http://54.235.166.226/weblog/ at this point, it will not work. To make it work, we need to modify the “Security Group” and add HTTP/HTTPS rules. Once done, we can access the website.

Migrating WordPress Data

After the default WordPress setup is done, we need to migrate the MySQL data and the WordPress files from the existing installation.

1) Take a backup of the MySQL database using mysqldump or the phpMyAdmin tool (see the example below).

Note: if you have set up a MySQL backup plugin and have been receiving backups by email, ensure that all tables are selected in the backup. I made the mistake of not checking it, and a few tables were missing.
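
A mysqldump along these lines produces the dump file used below (a sketch; adjust the credentials and file name to your setup):

mysqldump -u root -p blog > mysql_data.sql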

Then import the data using phpMyAdmin or the mysql command. Below we are importing the data into the “blog” database:

mysql -u root -ptest blog <mysql_data.sql

2) Copy all the files, including plugins, themes and images, from the existing installation to the new site.

3) Run the following commands in the MySQL database to try out the temporary site:

UPDATE wp_options SET option_value = replace(option_value, 'http://askdba.org/weblog', 'http://54.235.166.226/weblog') WHERE option_name = 'home' OR option_name = 'siteurl';
UPDATE wp_posts SET guid = replace(guid, 'http://askdba.org/weblog','http://54.235.166.226/weblog');
UPDATE wp_posts SET post_content = replace(post_content, 'http://askdba.org/weblog', 'http://54.235.166.226/weblog');

4) Next, log in to the WordPress dashboard and update the permalinks setting, which will fix the permalinks.

Optimizing Webhost Setup

If we run with the default setup, our site will suffer downtime as the server runs out of memory; you can see out-of-memory errors in the Apache error logs. AWS micro EC2 instances have 613M of RAM and are not configured with swap. You can configure swap if you want (a sketch follows).
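
A minimal sketch for adding a 1G swap file, if you go that route (standard Linux commands, not something this setup actually uses):

# create and enable a 1G swap file, and make it persistent across reboots
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab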

But instead of configuring swap, I decided to tune the Apache server processes and other parameters. The following changes were made to the Apache config file /etc/httpd/conf/httpd.conf:

Timeout 30
KeepAlive on
MaxKeepAliveRequests 50
KeepAliveTimeout 15
#Changes to prefork module
<IfModule prefork.c>
StartServers 3
MinSpareServers 2
MaxSpareServers 5
ServerLimit 256
MaxClients 10
MaxRequestsPerChild 1000
</IfModule>

After saving the file, we need to restart the Apache process, but let’s wait until we have also tuned PHP and the MySQL database.

To tune the MySQL memory usage, I used the following values (in the MySQL configuration file, typically /etc/my.cnf):

[mysqld]
port		= 3306
socket		= /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 1M
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
query_cache_limit       = 1M
query_cache_size        = 16M

Next, we modify the memory available to PHP. We allocated 128M:

# grep memory_limit /etc/php.ini
memory_limit = 128M

Once done, restart the MySQL server:

service mysqld restart

This might fail if InnoDB parameters such as the log file size were changed. To fix it, remove (or move aside) the /var/lib/mysql/ib_logfile* files and restart mysqld again.
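
A minimal sketch, assuming the default InnoDB log file names (moving the files aside is safer than deleting them outright):

# move the old InnoDB log files out of the way and restart MySQL
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
service mysqld restart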

Now restart the Apache server:

service httpd restart

Making actual Move

By now you should be able to determine whether everything is working fine with your site on AWS. To make the actual move, take a fresh backup of the MySQL database and restore it to the new site; you can follow steps 1 and 2 in “Migrating WordPress Data”.

Now go to your web-hosting portal’s DNS management, modify your A record and update it with the Amazon Elastic IP address associated with the EC2 instance. DNS propagation may take some time. I have signed up with CloudFlare (free account) and pointed my Dreamhost name servers to CloudFlare, so DNS management is done in CloudFlare and propagation happens in less than 5 minutes.

You can rename your directory at the old hosting site to ensure that you are not logging in to the old site by mistake. Once logged in to the new site’s dashboard, update the permalinks. You are done moving your site 🙂

Optimizing WordPress with w3tc

W3 Total Cache is one of the most popular caching plugins and is highly recommended for any website running WordPress. You can use W3TC to further optimize WordPress through caching and a CDN.

Caching

You can cache pages, DB queries and objects to disk so that they are served directly from cache. If using AWS or any virtual hosting, you can also make use of APC (Alternative PHP Cache) to cache compiled PHP code in memory. To use it, we need to first install the RPM and then restart the Apache server:

#yum install php-pecl-apc

You need to allocate memory to APC. It is controlled by the shm_size parameter in /etc/php.d/apc.ini:
apc.shm_size=64M

#service httpd restart

Once done, the Opcode: Alternative PHP Cache (APC) option will be available in W3TC for caching.
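
You can confirm that the APC extension is actually loaded before selecting it in W3TC; a quick, generic check:

php -m | grep -i apc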

CDN

You can make use of a CDN to cache images and static files, which relieves some load on your web-hosting server. Amazon AWS provides Amazon CloudFront and Amazon S3 storage.

Configuring S3 storage is easy. Go to the S3 console on the AWS dashboard and create a bucket, e.g. cacheserver.askdba.org. Edit the properties of this bucket and grant view permission to everyone. Next, go to WordPress Dashboard -> Performance -> General and choose S3 storage. Then go to Dashboard -> Performance -> CDN and enter the information shown below. You can get the access key and secret key from the Security Credentials page.

aws_s3_cdn

Add a CNAME entry in your DNS portal: cacheserver.askdba.org should be an alias of s3.amazonaws.com. Next, you can upload the attachments, plugins and theme files to S3 (found under CDN -> General).

If you want files to be served with low latency, you can make use of Amazon CloudFront, which provides caching at various data centres in the US, Europe, APAC (Tokyo) and APAC (Singapore).

To set up CloudFront, go to the AWS CloudFront console. Click on Create Distribution and, in Origin Domain Name, enter your domain (you can also use S3 storage to serve these files). Provide the list of CNAMEs you would like to use; a CNAME is useful because otherwise images will show URLs like dfkkfkk.cloudfront.net/1.img. You can specify up to 10 CNAMEs. Also create these aliases in your DNS portal (CNAME entries). For Price Class, use the default, i.e. all locations.

Next, go to WordPress Dashboard -> Performance -> General and choose Amazon CloudFront (under Origin Pull). We then need to update the CDN entries as below (get the CloudFront domain name under the distribution settings in the AWS console).

aws_cloudfront

Deployment would take around 10-15 mins.

Is Free AWS actually Free?

I mentioned above that Amazon AWS provides a free account for 1 year. But in reality it is not free!! The reason is that the micro instance comes with an EBS volume that has 2 million I/Os as the monthly free quota, but I noticed that my EC2 instance does about 1 million I/Os daily (the OS, Apache and MySQL account for it). This costs $0.10 per 1 million I/Os, and it shows up in my monthly Amazon AWS billing statement with a total charge of $2.57 (up to 25 Jan). So this free account would cost about $3 for the month (and if you use CloudFront or S3 storage, there will be additional charges).

 

Charge: $0.10 per 1 million I/O requests
I/O requests: 25,727,955
Cost: $2.57

How much will it cost?

As per the AWS EC2 pricing plan, if we host the site in US East (N. Virginia) the following rates apply:

aws_cost
So if we go for a 3-year reserved micro instance, it would cost us:

Upfront payment ($100 for 3 years): 100/36 = $2.78 per month
Hourly charge (30 days): 24*30*$0.005 = $3.60 per month
EC2 I/O: $3.00 per month
Amazon CloudFront/S3: $0.50 per month


This comes to nearly $10 per month, which is not bad (plus an additional $10 for the domain). Note that this pricing is applicable only to the 3-year reserved instance; other plans would cost more.

Summary

Amazon AWS provides a lot of benefits and flexibility for hosting websites. Big websites can even make use of the Amazon RDS database, which can provide high availability. Using AWS allows you to move to bigger servers for hosting websites very quickly (you can clone the host using the EBS volume). The downside of using AWS for a WordPress blog is that you need to do a lot of setup, and maintenance might not be an easy task. You might also experience some downtime if things are not configured properly.

References

http://coenraets.org/blog/2012/01/setting-up-wordpress-on-amazon-ec2-in-5-minutes/
http://imperialwicket.com/tuning-apache-for-a-low-memory-server-like-aws-micro-ec2-instances