More than one way to do Cloud Bursting


Cloud bursting is a hot topic in cloud computing today. It’s a model in which an organization uses its internal resources to host the services critical to its business and, during demand spikes, consumes resources from public clouds on a pay-as-you-go basis.

Cloud bursting use cases

Today only a handful of businesses face real cloud bursting challenges, usually because of specific use cases. They run applications whose demand spikes come from compute-intensive work such as image processing, scientific computing, or monthly batch calculations, or they burst to run development and test environments. Since the latter typically don’t involve client data, they are not subject to strict regulatory compliance, which means you can run them on almost any infrastructure.

Support model

To support a strong cloud bursting model, several parts must come together:

  • A shared network between the public clouds and the datacenter.
  • Automated, repeatable deployments that can launch to any of the required clouds regardless of platform differences.
  • A single management console to consistently support and maintain all deployment artifacts.
  • The ability to specify the amount, ratio, and priority of cloud resources that applications can consume (a minimal sketch of such a policy follows this list).
  • The ability to identify an application’s load needs and configure them into the tools that manage scaling.
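
To make the last two bullets concrete, here is a minimal sketch of how the amount and priority of cloud resources might be expressed as a burst policy and evaluated. The cloud names, field names, and the pick_target helper are purely illustrative assumptions, not part of any ElasticBox or provider API.

```python
# Illustrative only: the policy fields and cloud names below are assumptions,
# not an ElasticBox or cloud-provider schema.
burst_policy = {
    "private_datacenter": {"priority": 1, "max_instances": 20},
    "aws_us_east":        {"priority": 2, "max_instances": 10},
    "azure_west_us":      {"priority": 3, "max_instances": 5},
}

def pick_target(policy, current_counts):
    """Return the highest-priority cloud that still has spare capacity."""
    for cloud, limits in sorted(policy.items(), key=lambda kv: kv[1]["priority"]):
        if current_counts.get(cloud, 0) < limits["max_instances"]:
            return cloud
    return None  # every cloud is at its configured ceiling

# The private datacenter is full, so the next burst lands in AWS.
print(pick_target(burst_policy, {"private_datacenter": 20, "aws_us_east": 3}))
```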

Most cloud providers offer the first piece today, either as dedicated connections or as virtual network infrastructure.

For the second and third, a robust solution like ElasticBox can address the requirements. We enable automated, consistent deployments across different cloud platforms from a unified interface for managing deployment artifacts, with a level of built-in IT governance.

Now the question is: how do you predictably detect demand spikes and scale resource consumption into public clouds?

One answer is to integrate ElasticBox with the basic monitoring tools in the private cloud: build a simple prediction model in a box and define the events that trigger public cloud deployments.
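
As a rough illustration, such a prediction model can be as simple as a threshold check over recent samples. The sketch below assumes a metrics_source() that reads load from the private cloud’s monitoring tool and a deploy_to_public_cloud() hook that launches an instance from a box; both are hypothetical placeholders, not actual ElasticBox calls.

```python
import time

CPU_BURST_THRESHOLD = 0.80   # burst once sustained load exceeds 80%
SAMPLES = 5                  # require several consecutive samples to avoid flapping

def should_burst(samples, threshold=CPU_BURST_THRESHOLD):
    """A very simple 'prediction model': every recent sample is above the threshold."""
    return len(samples) == SAMPLES and all(s > threshold for s in samples)

def watch(metrics_source, deploy_to_public_cloud, interval=60):
    """Poll the monitoring tool and trigger a public cloud deployment on sustained load."""
    recent = []
    while True:
        recent.append(metrics_source())      # hypothetical hook into the monitoring tool
        recent = recent[-SAMPLES:]
        if should_burst(recent):
            deploy_to_public_cloud()         # hypothetical hook that launches an instance
            recent.clear()                   # reset after triggering a burst
        time.sleep(interval)
```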

Implementing cloud bursting

Say we have a cloud bursting use case where we want to burst into the public cloud. ElasticBox gives us two options:

Option 1: Admin driven
As an admin, you use a monitoring tool like New Relic or AppDynamics to check infrastructure health and load. When a metric crosses a threshold, the tool alerts you, and you manually deploy additional instances to the public cloud provider of your choice. Once the spike passes, you use the ElasticBox instance scheduler to scale the number of deployed instances back down.
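
On the receiving end, the admin-driven path could look like the hedged sketch below: a tiny webhook listener that accepts the alert a tool like New Relic or AppDynamics can be configured to send and surfaces it to the admin, who then deploys by hand. The payload fields and the notify_admin helper are assumptions, not a documented webhook schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def notify_admin(message):
    print(f"ALERT for admin: {message}")     # stand-in for email, chat, or a pager

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Assumed fields; adjust to whatever your monitoring tool actually sends.
        notify_admin(f"{payload.get('condition', 'load threshold')} breached "
                     f"on {payload.get('target', 'unknown host')}")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```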

Option 2: Fully automated
Here you apply the same monitoring process as in option one, but in this case an auto-scaling policy defines the minimum and maximum number of instances. When the policy triggers an alert, new instances launch automatically, and after the demand spike subsides, the instances scale back down on their own.
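
The automated path boils down to a reconcile loop bounded by the policy’s minimum and maximum. The sketch below assumes current_load(), launch_instance(), and terminate_instance() hooks into your monitoring tool and cloud provider; the thresholds are illustrative, not defaults of any real product.

```python
MIN_INSTANCES = 2      # always keep this many running
MAX_INSTANCES = 10     # never burst past this ceiling
SCALE_UP_AT = 0.80     # add capacity above 80% average load
SCALE_DOWN_AT = 0.30   # remove capacity below 30% average load

def reconcile(instances, current_load, launch_instance, terminate_instance):
    """One pass of the policy: burst out on high load, scale back when it subsides."""
    load = current_load()
    if load > SCALE_UP_AT and len(instances) < MAX_INSTANCES:
        instances.append(launch_instance())      # burst into the public cloud
    elif load < SCALE_DOWN_AT and len(instances) > MIN_INSTANCES:
        terminate_instance(instances.pop())      # retire a burst instance
    return instances
```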

So what’s your use case for cloud bursting? How do you handle it today? Do you want to automate it end-to-end with the help of ElasticBox? Talk to us for a demo.

Categories: Cloud Application Management, Cloud Computing, DevOps, ElasticBox