Multi-tier Infrastructure

“The LAMP stack is good for a developer on their own machine, but not ideal for a business architecture. The reason is the single point of failure.”

Why a LAMP stack is not good enough for business design

Working with a LAMP stack is ideal for a developer. They have their operating system, middleware, database, package and application all in one place, whether it be Linux, Apache, MySQL, PHP and index.php, or Windows, IIS, MS SQL, .NET and .aspx. But from an infrastructure point of view this does not work, because of the single point of failure: if one layer of the stack goes down, the whole stack fails.

As modern business transactions and interactions happen online, high availability is paramount. From education to the food industry, everything is digital. We must make sure the infrastructure is highly available and sits behind a load balancer, so that every customer experiences the same engagement with the business portal. We must design a solution so that the first customer's experience is the same as the millionth customer's.
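The idea above can be sketched as a minimal round-robin load balancer: each request goes to the next healthy server in rotation, so no single customer depends on a single machine. The server names and the `healthy` flag are illustrative assumptions, not part of the original design.

```python
# Minimal sketch of round-robin load balancing across a fleet.
# Server names are hypothetical placeholders for the four instances.

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.next = 0

    def route(self):
        """Return the next healthy server in rotation, skipping failed ones."""
        for _ in range(len(self.servers)):
            server = self.servers[self.next]
            self.next = (self.next + 1) % len(self.servers)
            if server["healthy"]:
                return server["name"]
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer([
    {"name": "az1-web1", "healthy": True},
    {"name": "az1-web2", "healthy": True},
    {"name": "az2-web1", "healthy": True},
    {"name": "az2-web2", "healthy": True},
])
print([lb.route() for _ in range(5)])
```

Because the rotation wraps around, the fifth request lands back on the first server, and an unhealthy server is simply skipped until it heals.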

Auto scaling  

There are four companies, A, B, C and D, who pay salaries through the same bank, Santander. From Santander's perspective, the whole month is quiet except for the 30th or 31st, when salaries get paid in. An influx of transactions occurs because it is payday. The bank's infrastructure must be able to scale up and down based on the traffic dynamics. The load balancer can only distribute traffic if the resources are available.
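The payday scenario can be sketched as a simple scaling decision: size the fleet to the current traffic, within fixed bounds. The per-instance capacity, minimum and maximum here are illustrative assumptions, not figures from the design.

```python
# Hedged sketch of an auto-scaling decision: quiet all month,
# then a spike on the 30th/31st. Thresholds are illustrative.

def desired_instances(requests_per_min, per_instance_capacity=1000,
                      minimum=2, maximum=8):
    """Scale the fleet to match traffic, clamped between minimum and maximum."""
    needed = -(-requests_per_min // per_instance_capacity)  # ceiling division
    return max(minimum, min(maximum, needed))

print(desired_instances(500))    # quiet mid-month day -> stays at the minimum
print(desired_instances(6500))   # payday spike -> fleet grows
```

The clamp matters in both directions: scaling down saves money on quiet days, while the maximum stops a traffic spike from running up an unbounded bill.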



If one machine fails, then in the time it takes for a user to log in and check, the machine should have healed itself unattended, without human intervention. Infrastructure that is both nimble and agile at the same time.
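Unattended self-healing can be sketched as a watchdog loop: inspect the fleet and replace anything that has failed, with no human in the loop. The replacement step is simulated here; the instance names and statuses are illustrative assumptions.

```python
# Sketch of self-healing: every failed instance is replaced
# automatically. In AWS this role is played by health checks that
# terminate and relaunch instances; here it is simulated in memory.

def heal(fleet):
    """Return the fleet with every failed instance replaced by a fresh one."""
    healed = []
    for instance in fleet:
        if instance["status"] == "failed":
            healed.append({"name": instance["name"] + "-replacement",
                           "status": "running"})
        else:
            healed.append(instance)
    return healed

fleet = [{"name": "web1", "status": "running"},
         {"name": "web2", "status": "failed"}]
print(heal(fleet))
```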


The resolution to this is multi-tier infrastructure. Presented in the diagram is a multi-tier design. It begins with a load balancer distributing traffic between two Availability Zones, which are Amazon's alias for data centres. There are four instances, two within each zone, but the database is constructed independently from the instances.

The Relational Database Service (RDS) is contained within its own entity and can be queried as an external service. The database consists of a master and a slave, plus two replicas: one for reads and one for writes.

I intended to have a database that is outside my stack, a centralized database. I'll add a load balancer endpoint to the database (not depicted in the diagram). We have a master database. When Mike goes on and puts his shoes in the basket, the traffic takes the first server route. When he logs back on to look at his basket listing, the traffic is proxied to another server because the first was down. That doesn't harm the basket details, because the database is maintained independently. There is synchronisation between the master and the slave DB. If the master fails, the slave will be promoted as the new master. While this is happening, self-healing is taking place: the slave DB in the second Availability Zone becomes a master with no slave, and will create a new slave in the first Availability Zone.
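The failover described above can be sketched in a few lines: when the master dies, the slave in the other Availability Zone is promoted, and a fresh slave is created back in the zone the old master occupied. The cluster structure and zone names are illustrative assumptions.

```python
# Minimal sketch of master/slave failover with self-healing:
# promote the surviving slave, then rebuild a slave in the vacated zone.

def promote_on_failure(cluster):
    """If the master is down, promote the slave and create a replacement slave."""
    if cluster["master"]["alive"]:
        return cluster
    old_slave = cluster["slave"]
    return {
        "master": {"zone": old_slave["zone"], "alive": True},
        # self-healing: a new slave appears in the zone the master left
        "slave": {"zone": cluster["master"]["zone"], "alive": True},
    }

cluster = {"master": {"zone": "az1", "alive": False},
           "slave": {"zone": "az2", "alive": True}}
print(promote_on_failure(cluster))
```

The key property is that after failover the cluster is back to a full master-plus-slave pair, just with the roles swapped between the two zones.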

By moving the database out and creating a separate tier, distinct from the application, you make sure you can administrate it individually and have granular control over your database design.

What happens if the master and slave both fail at the same time? You can restore from backup or fail over to Disaster Recovery, but this is not the answer I'm looking for. This is where replicas come into play: the R replica handles read requests, the W replica handles write requests. Autoscaling cannot be performed on the database, so we must think ahead and create as many replicas as we can.
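The R/W split can be sketched as a query router: SELECTs rotate across the read replicas, everything else goes to the write endpoint. Using two read replicas here is an illustrative assumption to show the rotation; the endpoint names are hypothetical.

```python
# Sketch of read/write splitting. Since the database tier cannot
# autoscale, spreading reads across pre-created replicas is planned
# up front; writes always hit the single write endpoint.

def route_query(sql, read_replicas, write_endpoint="W"):
    """Send reads round-robin to the replicas; send writes to the write endpoint."""
    if sql.lstrip().upper().startswith("SELECT"):
        replica = read_replicas.pop(0)
        read_replicas.append(replica)   # rotate for the next read
        return replica
    return write_endpoint

replicas = ["R1", "R2"]
print(route_query("SELECT * FROM basket", replicas))
print(route_query("INSERT INTO basket VALUES (1)", replicas))
print(route_query("select name from users", replicas))
```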

Projects with this architecture in mind will be built. For now, let's see how much this design would cost using Amazon Web Services.

Monthly pricing for this architecture

EC2 Instance

There are four instances, so four large Red Hat servers with two disks each: that's 8 x 50 GB SSD drives totalling 400 GB, facilitating a total of 6 TB of data transfer per month. This sums to £673.44.
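The disk sizing above is just arithmetic, and can be checked in a couple of lines:

```python
# Quick check of the storage total: four servers, two 50 GB SSDs each.
servers, disks_per_server, disk_gb = 4, 2, 50
total_gb = servers * disks_per_server * disk_gb
print(total_gb)  # 400
```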


Route 53 will be implemented. There will be one hosted zone with one traffic flow policy, accompanied by DNS failover health checks for the endpoints.



Relational Database Service (RDS). There will be one master, one slave and two replicas, with a total of 4 TB of data transfer taking place.

Final cost

Jillian Scott

The total for this design, as far as AWS is concerned, is $8,532.31.
