As we move from traditional applications into the cloud, we’re seeing differences in how applications are written. In general, we are moving away from the slow, sluggish world of traditional applications to one that is smarter, faster and more agile. The results shine through in customer experience and speed of innovation.
Both traditional and cloud native applications make use of load balancers, but they differ significantly in when and where those balancers come into play. Traditional applications are written in what’s known as a stateful manner. Users hit the load balancer as they arrive and are directed to a server. In a traditional environment you use sticky sessions, which keep each user tied to a specific server. Each server holds its own individual state for the user, so we have to keep sending that user back to the same server. If that server dies, the load balancer will direct traffic to a surviving server, but the user’s cart will be empty.
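To make the sticky-session problem concrete, here is a minimal sketch (not any particular load balancer’s API; the class and server names are invented for illustration). Each server keeps the cart in its own memory, and the balancer pins a user to one server by hashing the user id. When that server dies, the survivor that takes over has never seen the cart:

```python
# Illustrative sketch of sticky-session routing with server-local state.
# All names here are hypothetical; real balancers use cookies or IP hashing.

import hashlib

class Server:
    def __init__(self, name):
        self.name = name
        self.carts = {}  # state lives only in this server's memory

    def add_to_cart(self, user_id, item):
        self.carts.setdefault(user_id, []).append(item)

    def get_cart(self, user_id):
        return self.carts.get(user_id, [])

class StickyBalancer:
    def __init__(self, servers):
        self.servers = servers   # shared reference, so removals are seen
        self.assignments = {}    # user_id -> pinned server (the "sticky" table)

    def route(self, user_id):
        server = self.assignments.get(user_id)
        if server is None or server not in self.servers:
            # First visit, or the pinned server died: pick by hashing the user id.
            idx = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % len(self.servers)
            server = self.servers[idx]
            self.assignments[user_id] = server
        return server

servers = [Server("web-1"), Server("web-2"), Server("web-3")]
lb = StickyBalancer(servers)

s = lb.route("alice")
s.add_to_cart("alice", "book")

servers.remove(s)          # the pinned server dies
s2 = lb.route("alice")     # re-routed to a survivor...
print(s2.get_cart("alice"))  # ...but the cart is empty: []
```

Because the cart was only ever in the dead server’s memory, the reassignment is invisible to the balancer but painful for the user.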
So, let’s imagine you’re buying something on Amazon and you’ve filled up your shopping cart. That cart exists only on a single server, and if something were to happen to that server, you’d lose all your items. It adds time, creates frustration and reduces the chances that you will actually complete a purchase.
The same applies to logins: if you were sent to a second server, you’d have to log in again. Nor can you dynamically adjust the number of servers you have running at any given time. With cloud native, you can, because cloud native applications are stateless or use shared state. Their load balancers don’t need to be as sophisticated: no sticky sessions are required, so a user arrives from the web, hits the load balancer, and the request can be sent to any server, because every server reads the same shared state. If a user puts an item in their cart, it is written to the shared store and is visible from every server, which means a smoother and more positive user experience.
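The stateless alternative can be sketched the same way (again with invented names): the servers keep no per-user state, and the cart lives in a shared store, represented here by a plain dict standing in for something like Redis. Any server can serve any request, so losing a server loses nothing:

```python
# Sketch of stateless servers behind a simple round-robin balancer.
# The dict stands in for an external shared store such as Redis.

shared_store = {}  # cart state shared by every server

class StatelessServer:
    def __init__(self, name):
        self.name = name

    def add_to_cart(self, user_id, item):
        shared_store.setdefault(user_id, []).append(item)

    def get_cart(self, user_id):
        return shared_store.get(user_id, [])

class RoundRobinBalancer:
    """No sticky table needed: any server can handle any request."""
    def __init__(self, servers):
        self.servers = servers  # shared reference, so removals are seen
        self._next = 0

    def route(self, _user_id):
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        return server

servers = [StatelessServer("web-1"), StatelessServer("web-2"), StatelessServer("web-3")]
lb = RoundRobinBalancer(servers)

s1 = lb.route("alice")
s1.add_to_cart("alice", "book")

servers.remove(s1)           # the server that took the first request dies
s2 = lb.route("alice")       # a different server handles the next request...
print(s2.get_cart("alice"))  # ...and still sees the cart: ['book']
```

The balancer here is deliberately dumb; because the state moved out of the servers, it can afford to be.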
This is a better way for a couple of reasons. When Amazon’s cloud first came out, for instance, its virtual machines were on shaky ground. Traditional applications need servers to be stable and to have the resources they require, because the virtual machine and the application are tightly intertwined. In a cloud native environment, the infrastructure is assumed to be unreliable, so you must build your application in a way that does not rely on any particular virtual machine. We rely far less on technologies such as VMware vMotion: if any server dies, the load balancer sends traffic to the surviving servers and users lose nothing.
In a traditional application, if you have three web servers behind a load balancer and there is a sudden surge in demand, you’ll need to add more servers. Unfortunately, the load balancer has been sending roughly 33% of traffic to each existing server, and it will take a while for the new servers to ramp up because only new users will be sent to them. With sticky sessions turned on, existing users remain pinned to the existing servers and will not be transferred until you kill off the old servers, which results in a negative experience. When the surge in demand vanishes, you will have to cull servers, which can, again, result in a negative user experience.
Cloud native applications use shared state, so there is no need for sticky sessions at all. If you have an uptick in demand and add three more servers, traffic is divided equally among them all; if you have a drop in demand, you can kill servers and their users are simply sent to the remaining ones without any interruption to the user experience.
The nature of cloud native applications is to facilitate DevOps: a combination of people, processes, and tools working in close collaboration, delivering finished application code into production faster and more smoothly. Traditional applications, on the other hand, are slow to ship and built by siloed teams. The priorities of your own organizational structure take precedence over the final value for your customers, resulting, once again, in a negative user experience.
Backup and continuous delivery
Traditional applications will receive updates and new development, but these can be separated by several months, and when they arrive they can delay or interrupt the processes and services your key customers rely on. Inevitably, important updates will take you offline for a period of time, which not only impacts the user experience but also sees you missing out on countless revenue-generating opportunities.
The cloud offers the ability to deliver continuous operations. Updates become available as soon as possible, making for a smoother developmental curve which ensures you always have the latest functionality available. Updates result in no interruption for existing services which means you can accelerate development, maintain a tighter feedback loop and continue to cater to your customers’ needs.
Backup on the cloud is automated. The container orchestrator provides dynamic, high-density virtualization on top of your VMs, a natural match for microservices: containers are placed by the orchestrator across a cluster of VMs to provide elastic scaling and recovery. Traditional applications offer poor backup capabilities, and mistakes can be legion. You can refactor a traditional application, changing the code without changing the infrastructure, and move it onto cloud native infrastructure, but if you’re developing new applications you should start cloud native.
Most firms will support cloud native infrastructure. In short, the difference between cloud native and traditional applications is the difference between agility, flexibility and the ability to collaborate on the one hand, and a more siloed infrastructure that can be slow to upgrade and develop on the other.
Check out our latest whitepaper – Cloud-Native IT Transformation: Building Apps Faster and Better