Part 1 of a two-part blog post that shows you how we deploy MongoDB clusters across different providers. Part 1 covers Disks, Networks, Firewalls, and Tags. Part 2 will focus on network tunnels, routers, and connecting VPCs.
As mentioned in a previous blog post, ElasticBox relies on MongoDB clusters internally for our data needs. In order to create redundancy and high availability for our production MongoDB databases, we deploy them in a VPC on AWS and a Private Network on Google Compute Engine across different regions. If you've ever had the pleasure of trying to configure such a network, you'll know it's no easy feat.
As we started to work on this scenario, we realized that there were better ways to utilize the available services and configurations from Google Cloud Platform and AWS. So today we're releasing new features that enable full networking support for Google Compute Engine. Specifically, we've made changes to deployment profiles – our default tool for configuring infrastructure – to make better use of Tags, Firewall Rules, and Routes. While we were at it, we also introduced the ability to specify disk size.
Here are some of the production scenarios we have enabled with this update:
- Deploying machines with specific sets of rules for security purposes or to restrict public access (no public IP address)
- Using either Tags or Firewalls for configuration purposes to make it easier to choose the right settings
- Specifying Ephemeral IP or IP forwarding configurations
- Connecting instances in VPCs or private networks (more coming soon)
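Outside of ElasticBox, the IP-related scenarios above map roughly onto gcloud flags like the following. This is a minimal sketch; the instance names, zone, and tag are illustrative placeholders, not values from our setup:

```shell
# Launch an instance with an ephemeral external IP (the default) and
# IP forwarding enabled, so it can later route traffic for a tunnel.
gcloud compute instances create mongo-replica-1 \
    --zone us-central1-a \
    --can-ip-forward \
    --tags mongodb-replica

# To restrict public access entirely, deploy with no external IP:
gcloud compute instances create mongo-replica-2 \
    --zone us-central1-a \
    --no-address
```

The deployment profile exposes these same choices as form options, so you get the equivalent configuration without hand-writing CLI commands.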
To demonstrate the changes we've made to our deployment profiles to enable better use of Networks and Disks, we'll be looking at our MongoDB Replica Box, which, as you'll know if you caught our previous post on 'How To Deploy a MongoDB Cluster,' is a replica instance of the Master MongoDB Box.
Tags and Firewalls:
When deploying an instance of MongoDB on your GCE account, you want to enable certain types of traffic to that instance, so you might use one or more specific firewalls or a tag, which you would then associate with the instance when deploying.
When configuring your deployment profile, you can specify the network configuration for an instance in either of two ways: select the firewall you'd like to use, which will then present the associated Tag, or select the Tag directly.
In the case of Tags, we're using what Google Compute Engine calls 'targetTags', defined as "A list of instance tags that specify which instances on the network can accept requests from the specified sources. If not specified, this firewall rule applies to all instances on this network."
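To make the targetTags behavior concrete, here is a sketch of how such a firewall rule could be created with gcloud. The rule name, network, port, and source range below are illustrative assumptions (27017 is MongoDB's default port), not our production values:

```shell
# Allow MongoDB traffic only to instances carrying the
# 'mongodb-replica' tag on the given network. Instances without
# the tag are unaffected by this rule.
gcloud compute firewall-rules create allow-mongodb \
    --network my-network \
    --allow tcp:27017 \
    --source-ranges 10.240.0.0/16 \
    --target-tags mongodb-replica
```

Any instance deployed with the matching tag then automatically accepts this traffic, which is why choosing either the firewall or its Tag in the deployment profile amounts to the same configuration.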
For Google Compute Engine, you can now use the ElasticBox Deployment profile to specify disk size. Simply turn on the disks feature in the Deployment Profile and specify the disk size you would like to use. You can also add multiple disks if you like. As a side note, Google recently announced the general availability of SSD persistent disks for all users and projects.
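For reference, what the deployment profile does for you corresponds roughly to the following gcloud steps: create a persistent disk of the desired size and attach it when the instance is created. The disk size, type, names, and zone here are illustrative placeholders:

```shell
# Create a 200 GB SSD persistent disk (pd-ssd is the SSD
# persistent disk type announced as generally available).
gcloud compute disks create mongo-data-disk \
    --size 200GB \
    --type pd-ssd \
    --zone us-central1-a

# Attach the disk at instance creation time.
gcloud compute instances create mongo-replica-1 \
    --zone us-central1-a \
    --disk name=mongo-data-disk
```

In the deployment profile, adding multiple disks is just a matter of repeating the disk entry with different sizes.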
These upgrades to the ElasticBox deployment profile make it much easier to enable the complex scenarios of managing production instances on multiple providers, configuring networks, and connecting VPCs. In the next blog post on this topic, we'll cover how to connect the MongoDB Cluster instances we have deployed in VPCs across multiple providers such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure.