Mentat Collective

Fix it Fix it Fix it: Automation is Scientific Magic Part 1

In the Beginning

Hello, and welcome to the first installment of Fix it Fix it Fix it. This blog series intends to capture the day-to-day life of the Mentat DevOps engineering team as we enjoy the struggles and excitement of automating the world. Our first blog post is titled Fix it Fix it Fix it: Automation is Scientific Magic, Part 1.

The majority of projects that we take on at Mentat are extremely difficult enterprise hybrid-cloud problems. They range from “Mentat, how do we manage and monitor our global cloud deployment that is split between Azure, AWS, and VMware?” to “Mentat, how do we install software intended for Linux on N Windows machines, make that completely client agnostic, and reliably deploy the full stack of services in under 20 minutes?” This blog post is about the latter problem.

We recently had a client approach us with an automation project that was “almost complete”. It went something like this: “I have an Azure Automation project with almost all of the code written, but I need some help tying everything together, and I need to be able to repeatedly deploy this full stack to support a website.” It sounded like a quick automation win that would provide valuable experience for a few of our junior DevOps engineers. Almost all of the code was written, and we have a great deal of Azure expertise. This should be a breeze. As it turned out, not exactly.


Solution Tech Stack:

  • Windows Server 2016

  • Solr

  • Zookeeper

  • Mongo

  • PowerShell DSC

  • Chocolatey

  • Terraform

  • Jenkins

We dug into the code and quickly realized that the “just about completed” code was nowhere near functional. Solr, Mongo, and IIS did not install automatically, and the Azure ARM templates only occasionally worked. When the templates did deploy, most of the infrastructure was misconfigured. What to do? Well, we simply started over and developed an automated pipeline with single-click deployment.


In order to rapidly tackle the work at hand, we spun up a Jenkins server, integrated Jenkins with our internal Gitlab, and immediately began recreating the ARM templates as Terraform code.


Earth-Shaping

If you have never used Terraform before, I would highly recommend that you download it and begin learning as fast as possible. Terraform will allow you to deliberately shape your cloud world to create a hospitable environment for your application to live and thrive. Ok, I took that a bit too far. On a more serious note, Terraform is an open-source tool for deploying infrastructure and platform services, capable of provisioning to just about every public and private cloud available. Ok, ok, it deploys to every cloud platform that humans are actually using.


Using Terraform, we were able to rapidly rewrite the ARM templates into modular infrastructure as code, capable of reliably deploying the entire Solr, Mongo, and IIS infrastructure stack within minutes.


To get started with Terraform, head over to https://www.terraform.io, download Terraform, and review the HashiCorp-provided reference for the specific cloud provider API that you need (for Azure: https://www.terraform.io/docs/providers/azurerm/index.html). Ok, I will give you a few minutes to download Terraform and peruse the API documentation for your cloud provider of interest.


I will. Really, I will. Go try it now. All set? Excellent! Now that you have Terraform on your local system and you are a master of the (insert cloud here) API reference, let’s rustle up some Terraform details on how to deploy the Solr, Mongo, and IIS stack in Azure.


In the beginning there were basics, and, as it happens, the basics were there in the end as well. But we should ignore that type of deep thought for now, and focus on Terraform. Let’s first look at the files and directories that you will be working with. The first step is to create a directory structure with a root directory and a modules directory. When you are done, it will look something like this:
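
.
├── site.tf
└── modules/
    ├── mongo/
    │   ├── main.mongo.tf
    │   └── variables.mongo.tf
    ├── solr/
    │   ├── main.solr.tf
    │   └── variables.solr.tf
    └── iis/
        ├── main.iis.tf
        └── variables.iis.tf

The module directory names here (mongo, solr, iis) are simply the ones we use for this particular stack; the per-module file naming is explained below, and you are free to pick your own.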


Terraform starts with a site.tf file that is kept in the root of your working directory. The site.tf file for this implementation includes the basics of how to connect to Azure, as well as references to the modules that will be called and executed. The modules directory above is where we stash the infrastructure code related to each stack that we want to build. Inside each module there are two files. First we have the main.thing1.tf file, which contains all of the cloud provider resource code specific to the module. The second file is the variables.thing1.tf file, which, as you can imagine, stores the variables that are specific to the module that the file resides in.
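
A stripped-down site.tf, written in the same Terraform 0.11 style as the rest of the code in this post, might look something like the sketch below. The credential variables and the handful of inputs passed to each module are illustrative; the real file wires quite a few more values (resource group, subnet IDs, VM maps, and so on) into each module:

provider "azurerm" {
  # Service principal credentials for the target subscription.
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

# One module call per stack, each pointing at its folder under ./modules.
module "mongo" {
  source   = "./modules/mongo"
  location = "${var.location}"
}

module "solr" {
  source   = "./modules/solr"
  location = "${var.location}"
}

module "iis" {
  source   = "./modules/iis"
  location = "${var.location}"
}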


At this point you may be thinking to yourself, “Jon, why do I need your special directory structure, what are these modules, and why am I still reading this?” These are all great questions to be thinking about. I can help you with the first two, but the last one is entirely on you :). The fact is, you do not need to use my file naming conventions and directory structure at all. I use this structure because it is simple and self-explanatory. My root directory stores my site.tf file, and my modules directory is where all of my modules reside. Easy peasy lemon squeezy.


Now that the basic structure has been set up, let’s take a walk through some of the files. First up we have MongoDB, which is stored as a module in the modules directory. Remember from before when I said I use this structure because it is easy? Right, right. Modules go into the modules directory. Below is a snippet from the MongoDB module.



# One NIC per MongoDB VM, numbered mongo-nic-01, mongo-nic-02, and so on.
resource "azurerm_network_interface" "mongo_nics" {
  count               = "${var.mongo_config["vm_count"]}"
  name                = "${format("%s-%02d", var.mongo_nics["name"], count.index + 1)}"
  location            = "${var.location}"
  resource_group_name = "${var.azurerm_resource_group["name"]}"

  ip_configuration {
    name                          = "${format("%s-%0d", var.mongo_nics["ip_configuration_name"], count.index + 1)}"
    subnet_id                     = "${var.mongo_subnet_id}"
    private_ip_address_allocation = "${var.mongo_nics["private_ip_address_allocation"]}"
  }
}

# Managed disk for the MongoDB data volume.
resource "azurerm_managed_disk" "mongo_vm_disks" {
  resource_group_name  = "${var.azurerm_resource_group["name"]}"
  location             = "${var.location}"
  name                 = "${var.mongo_vm_disks["name"]}"
  storage_account_type = "${var.mongo_vm_disks["storage_account_type"]}"
  disk_size_gb         = "${var.mongo_vm_disks["disk_size_gb"]}"
  create_option        = "${var.mongo_vm_disks["create_option"]}"
}

# The MongoDB VMs themselves, one per NIC created above.
resource "azurerm_virtual_machine" "mongo_vm" {
  resource_group_name              = "${var.azurerm_resource_group["name"]}"
  location                         = "${var.location}"
  count                            = "${var.mongo_config["vm_count"]}"
  name                             = "${format("%s-%02d", var.mongo_vm["name"], count.index + 1)}"
  network_interface_ids            = ["${element(azurerm_network_interface.mongo_nics.*.id, count.index)}"]
  vm_size                          = "${var.mongo_vm["vm_size"]}"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "${var.mongo_vm["storage_image_reference_publisher"]}"
    offer     = "${var.mongo_vm["storage_image_reference_offer"]}"
    sku       = "${var.mongo_vm["storage_image_reference_sku"]}"
    version   = "${var.mongo_vm["storage_image_reference_version"]}"
  }

  storage_os_disk {
    name              = "${format("%s-%0d", var.mongo_vm["storage_os_disk_name"], count.index + 1)}"
    caching           = "${var.mongo_vm["storage_os_disk_caching"]}"
    create_option     = "${var.mongo_vm["storage_os_disk_create_option"]}"
    managed_disk_type = "${var.mongo_vm["storage_os_disk_managed_disk_type"]}"
  }

  os_profile {
    computer_name  = "${format("%s-%0d", var.mongo_vm["os_profile_computer_name"], count.index + 1)}"
    admin_username = "${var.mongo_vm["os_profile_admin_username"]}"
    admin_password = "${var.mongo_vm["os_profile_admin_password"]}"
  }

  tags {
    environment = "Dev"
  }
}

To create resources in Terraform, all you really need to do is reference the API documentation and understand a few basic things. First, resource "azurerm_network_interface" "mongo_nics" {} is an example of how we created all of the network interface cards for the MongoDB implementation. The word resource tells Terraform that you would like to build a particular cloud resource, the two strings that follow are the resource type and a local name for it, and everything between the {} defines the settings needed to create an Azure VM NIC. There is a lot going on here, and frankly, too much to cover adequately in this blog post. That said, check back with the Mentat blog in the next couple of weeks, when we plan to release a focused post on working with Terraform.
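
In the meantime, here is a taste of the matching variables.mongo.tf. The module leans heavily on map variables, which is why the snippet above indexes into values such as var.mongo_config["vm_count"]. A trimmed-down version looks roughly like the sketch below; the defaults are placeholders rather than our client's real configuration, and the resource group, disk, and remaining VM maps follow the same pattern:

variable "location" {
  default = "eastus"
}

variable "mongo_subnet_id" {}

variable "mongo_config" {
  type = "map"

  default = {
    vm_count = 3
  }
}

variable "mongo_nics" {
  type = "map"

  default = {
    name                          = "mongo-nic"
    ip_configuration_name         = "mongo-ipconfig"
    private_ip_address_allocation = "dynamic"
  }
}

variable "mongo_vm" {
  type = "map"

  default = {
    name    = "mongo"
    vm_size = "Standard_DS2_v2"
  }
}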


Now that we have reviewed what Terraform is, walked through the directory breakdown, and gotten a good handle on what each of the files does, we should take a look at how we build things with Terraform.


There are three basic commands that you will need to know in order to get cloud infrastructure running:


terraform get → Downloads and updates the modules referenced from your site.tf file in the root directory. You should run this each time the module code is updated.


terraform plan → Creates an execution plan for your Terraform infrastructure and provides basic syntax checking.


terraform apply → Builds the infrastructure and generates a state file.
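
Putting those together, a typical run from the root directory looks something like this (on Terraform 0.10 and later you will also want a one-time terraform init to install the Azure provider plugin before the first plan):

terraform get     # pull in the modules referenced from site.tf
terraform plan    # preview what will be created, changed, or destroyed
terraform apply   # build the infrastructure and write the state file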

And that is it. One of the many great things about this setup is that we can copy the MongoDB module to create the other two modules, change the names, update a few stack-specific values, and we are done with our base Terraform code.


With the base infrastructure as code working, we can now commit the code to Git, and Jenkins will automatically run a test build! Wait, what?? Time to get groovy with Jenkins.


Groovy Cloud Duct Tape

Jenkins does not require any custom code (unless you want to write your own Jenkinsfiles), it is open source (free), and it is well supported. Jenkins is really the duct tape that can bind your complex environments, systems, and business processes together. Consequently, we chose to rapidly spin up a Jenkins master in our Azure account to run tests and Slack us when jobs complete.


To get started with Jenkins, head over to the Azure portal, open the marketplace, and search for Jenkins. Yes, there is actually a Jenkins image on the Azure Marketplace, and it works well as a quick POC or dev/test server. Once your Jenkins server is up, the first thing to do is create your first pipeline and integrate that pipeline with your source control. In our case we used an internal Gitlab and selected the working dev branch of the Solr, Zookeeper, MongoDB, and IIS stack.


Once you have integrated your source control with Jenkins, you can create a pipeline like the one we built to run our Jenkins build.


Our pipeline lays out five simple steps to run an end-to-end deployment of the Terraform code, followed by destruction of the infrastructure on success.
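
Whether you build that pipeline by clicking through the Jenkins UI or as pipeline-as-code, the flow is the same. A declarative Jenkinsfile along the lines of the sketch below is one way to express it; the stage names and commands are illustrative rather than a copy of our client's pipeline (the real job also sends a Slack notification when it finishes):

pipeline {
  agent any

  stages {
    stage('Checkout') {
      steps {
        // Pull the working dev branch from source control (Gitlab in our case).
        checkout scm
      }
    }
    stage('Terraform Get') {
      steps {
        sh 'terraform init && terraform get'
      }
    }
    stage('Terraform Plan') {
      steps {
        sh 'terraform plan'
      }
    }
    stage('Terraform Apply') {
      steps {
        sh 'terraform apply -auto-approve'
      }
    }
    stage('Terraform Destroy') {
      steps {
        // Tear the test infrastructure back down once the apply succeeds.
        sh 'terraform destroy -auto-approve'
      }
    }
  }
}

With the Gitlab integration in place, every push to the dev branch triggers this full build-and-tear-down cycle.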


And that is really about it. Git gets hooked up to Jenkins, Jenkins runs the Terraform code. Done. Well, not really. What we were able to accomplish in this short blog post was to create the baseline infrastructure to support Solr, MongoDB, and IIS. We still have to configure all of these servers with their respective services. “How, pray tell, do you intend to accomplish that?” Why, thank you for asking. Part 2 of our first Fix It Fix It Fix It blog series will go into detail on installing Solr, MongoDB, and IIS with PowerShell DSC and Chocolatey.


P.S.

This post was originally released in 2018. So please excuse our messy Terraform syntax. :)
