AWS Auto Scaling and Load Balancing Made Easy


Take advantage of automatic scaling and load balancing when you deploy applications with ElasticBox in AWS EC2 or VPCs. Load balancing evenly distributes traffic to application instances across all availability zones in a region, while auto scaling makes sure instances scale up or down depending on the load.

Why load balance and auto scale at the same time?

Paired together, auto scaling and load balancing provide useful benefits. Say you want to smoothly handle traffic surges to your website. When load increases, you want the website infrastructure to have enough capacity to serve the traffic. During bouts of low activity, naturally you want to reduce capacity.

With load balancing alone, you have to know ahead of time how much capacity you need so you can keep additional instances running and registered with the load balancer to serve higher loads. Or you could stop worrying about it and auto scale based on, say, CPU usage so that instances are added or removed dynamically with the load. That should give you a good idea of why it makes sense to have both.


How do you easily set them up in ElasticBox?

If you were to set this up directly in AWS, you’d have to configure load balancing and auto scaling individually: first you create a load balancer, attach it to an instance, and then associate the load balancer with an auto scaling group.
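To make the manual route concrete, here is a rough sketch of those three steps using the AWS CLI. All names and IDs (`my-web-elb`, `my-web-asg`, the instance ID, the availability zones) are placeholders, not values from this post:

```shell
# 1. Create a classic load balancer with an HTTP listener.
aws elb create-load-balancer \
  --load-balancer-name my-web-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-east-1a us-east-1b

# 2. Register a running instance with it.
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-web-elb \
  --instances i-0123456789abcdef0

# 3. Attach the load balancer to an auto scaling group (the group must already exist).
aws autoscaling attach-load-balancers \
  --auto-scaling-group-name my-web-asg \
  --load-balancer-names my-web-elb
```

Each step is a separate API call against a separate AWS service, which is exactly the busywork the deployment profile folds into one screen.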

In ElasticBox, we make this process easier by letting you select them right when you launch an instance. Essentially, you do both in one go from the deployment profile:


  • Add a new load balancer listener or select an existing one. Since you upgrade or replace application instances more often than load balancers, we recommend you reuse existing load balancers in production environments to retain the DNS settings that route traffic to the instances.
  • Select the listener and instance protocol and ports. Clients communicate with the instance via the load balancer based on the protocols and ports you set in the deployment profile. To allow traffic over HTTPS or SSL, you must upload a certificate to AWS beforehand.
  • Add auto scaling. To enable it, simply turn it on. We make sure instances scale within the instance limit you set in the deployment profile.
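The listener choices above map onto the underlying ELB API roughly like this. This is an illustrative sketch, not ElasticBox's actual call; the names, ports, and certificate ARN are hypothetical, and the HTTPS listener assumes the certificate was already uploaded to the account:

```shell
# One load balancer, two listeners: plain HTTP, and HTTPS terminating
# at the load balancer with a pre-uploaded certificate.
aws elb create-load-balancer \
  --load-balancer-name my-web-elb \
  --availability-zones us-east-1a \
  --listeners \
    "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080" \
    "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8080,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"
```

Note that the listener port (what clients connect to) and the instance port (where your app actually listens) are independent, which is why the deployment profile asks for both.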

What happens next?

Behind the scenes, via AWS, we register the instance with a new or existing load balancer and automatically create a security group or reuse an existing one you chose in the deployment profile.
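The security group step looks roughly like the following. The group name, port, and open-to-the-world CIDR are illustrative assumptions, not what ElasticBox necessarily creates:

```shell
# Create a security group for the application instances...
aws ec2 create-security-group \
  --group-name my-web-sg \
  --description "Allow traffic to instances behind the load balancer"

# ...and open the instance port the load balancer forwards to.
aws ec2 authorize-security-group-ingress \
  --group-name my-web-sg \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0
```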

If deploying in a VPC, we launch the load balancer and the instance in the same VPC subnet. As long as the VPC subnet has an internet gateway, clients should be able to communicate with the instance via the load balancer from the internet.

To auto scale, we create a launch configuration and an auto scaling group, and associate them with the load balancer and the existing security group.
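Sketched with the AWS CLI, that pairing looks something like the commands below. The AMI, instance type, group sizes, and names are hypothetical; the min/max sizes stand in for the instance limit you set in the deployment profile:

```shell
# A launch configuration describes the instances the group will create.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-web-lc \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --security-groups sg-0123456789abcdef0

# The auto scaling group ties the launch configuration to the load
# balancer and bounds how far the group may scale in either direction.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-web-asg \
  --launch-configuration-name my-web-lc \
  --min-size 1 --max-size 4 \
  --load-balancer-names my-web-elb \
  --availability-zones us-east-1a us-east-1b
```

Because the group references the load balancer, every instance it launches is registered with the load balancer automatically.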

Where the magic happens

If instance usage goes above 80% CPU load, the auto scaling group in AWS spins up a new instance, and the load balancer attached to the group automatically diverts client traffic to it. When the load goes down, AWS automatically scales the instances back down. In all, you experience virtually no downtime.
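An 80% CPU rule like the one above can be expressed as a simple scaling policy wired to a CloudWatch alarm. This is an illustrative sketch, not ElasticBox's exact configuration; all names are placeholders, and the `--alarm-actions` value should be the policy ARN returned by the first command:

```shell
# Scale-out policy: add one instance when triggered.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-web-asg \
  --policy-name scale-out-on-cpu \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# CloudWatch alarm: fire the policy when the group's average CPU
# stays above 80% for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name my-web-asg-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-web-asg \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy-arn-from-previous-command>
```

A mirror-image policy with a negative adjustment and a low-CPU alarm handles the scale-down side.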

There you have it! In one simple ElasticBox step, you get load balancing and auto scaling to work automatically for your deployments in AWS. So why not give it a shot today?


Categories: AWS, ElasticBox, News
  • Travis Johnston

    how do i use load balancing with >50 docker microservice api endpoints? Seems overkill to spin up a $20/mth AWS LB per microservice. Any ideas?

    • Loadbalancer

      Apologies for adding a blatant link to an old post but it is (fairly) relevant: Ben our cloud guy has just written a blog about integrating load balancing and autoscaling. ElasticBox looks interesting I might have a little play…

  • Ignacio Fuertes

    Hi Travis,

    If you have one microservice per instance, you don’t need 50 AWS LBs. You only need one per region in which you want to deploy.

    On the other hand, if you want to have 50 microservices on one instance, you cannot use an AWS LB to balance traffic between the microservices. In that case, you will need nginx or HAProxy to do the port forwarding inside that machine.
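    For what that port forwarding might look like, here is a hypothetical nginx fragment fanning one public port out to several microservices on local ports (the paths and ports are made up for illustration):

    ```
    # One public port, many local microservices.
    http {
        server {
            listen 80;

            location /users/ {
                proxy_pass http://127.0.0.1:8001/;
            }
            location /orders/ {
                proxy_pass http://127.0.0.1:8002/;
            }
            # ...one location block per microservice
        }
    }
    ```

    With this layout, a single AWS LB in front of the instances is enough: nginx handles the per-service routing inside each machine.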

    Let us know if you have any other question, you can reach us at