Automated Deployments
Azure DevOps and Docker (Part 1)
"Until code is in production, no value is actually being generated, because it’s merely WIP (work in progress) stuck in the system."
Ready to wave goodbye to manual deployments and embrace the magic of automation? Let’s face it, manually pushing code to production is about as fun as watching paint dry. It’s time-consuming, error-prone, and frankly, a bit of a drag. But what if I told you there’s a better way? A way to deploy your applications with the click of a button, freeing you up to focus on the things that really matter – like building awesome features and sipping your favorite coffee.
In this blog series, we’re diving headfirst into the world of automated deployments. We’ll explore the powerful duo of Azure DevOps and Docker, and show you how to create a seamless pipeline that takes your code from development to production in a flash. No more late-night deployments, no more frantic bug fixes, just smooth sailing all the way.
Before we dive in, it’s worth noting that automated deployments might not be essential for small applications, like a single standalone file. However, setting them up even for small projects is a wise practice. There are numerous benefits, and a crucial one is that it serves as a form of documentation. Other engineers can examine the system and understand the deployment process, reducing reliance on a single person or team. Additionally, it offers a comprehensive overview of the entire system, which I find invaluable when familiarizing myself with a new project.
I’ve worked on various microservices projects, from systems with just a couple of services to those with over 20 handling a single segment of a much larger system. When dealing with such large-scale projects, automated deployments become crucial for obvious reasons.
In this article, I’ll discuss a deployment workflow we can use to set up automated deployments. I’ll provide a high-level overview of the workflow, discuss its different components, and explain how to connect them to create the complete deployment process. I’ve used a Django application running on Docker, Azure DevOps for deployment management, and an AWS EC2 instance as the host. The host can be either a staging or production server. This brings us to another advantage of automated deployments – the ability to create multiple environments (like staging and production) for our application. Automated deployments help keep these environments as similar as possible, reducing the risk of deploying buggy code to production, which is always detrimental to the business.
To keep this article concise, I’ll assume readers are familiar with Docker, EC2, Azure DevOps, and the general concept of a CI/CD pipeline.
1.0 Components of the workflow
In this section, we’ll break down the various components of the workflow. We’ll delve into the role of each component, provide a general overview of the necessary configuration, and explore how each piece fits into the larger puzzle.
1.1 Local Development
Every system begins with a local development environment. In our case, the application we’ll deploy is a Django application, but familiarity with Django isn’t necessary for this article. We won’t delve into the Django application itself. That’s the beauty of Docker; as long as we can run an application within a Docker container, the deployment workflow is unlikely to fail. Thus, the fundamental unit of our local development environment is Docker.
This component consists of the following segments:
1. Application source code: The application source code contains the Django application.
2. Docker compose files: We have three Docker Compose files, each tailored for a different environment. Ideally, our application should use a single file to maintain consistency across environments. However, for simplicity, I’ve created three separate files since my local environment includes containers that aren’t needed in staging or production. For instance, in production, we might prefer a dedicated managed database server instead of running it within a Docker container.
3. Dockerfile: The Dockerfile defines how the Docker image is built from our application source code.
version: "3.9" services: app: image: ${REGISTRY_SERVER}/application # container_name: application ports: - 127.0.0.1:${HOST_PORT}:${DOCKER_PORT} networks: - microservice_net extra_hosts: - "host.docker.internal:host-gateway" volumes: - ./app environment: - APP_SECRET=${APP_SECRET} - MODE=${MODE} - DEFAULT_INVITATION_EXPIRATION=4320 - ATTACH_DEBUGPY=${ATTACH_DEBUGPY} - REDIS_URL=redis - REDIS_PORT=${REDIS_DOCKER_PORT} - REDIS_PASSWORD=${REDIS_PASSWORD} - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} - POSTGRES_USER=${POSTGRES_USER} - POSTGRES_DB_NAME=${POSTGRES_DATABASE} - POSTGRES_URL=postgres - POSTGRES_PORT=${POSTGRES_DOCKER_PORT} - POSTGRES_ENGINE=${DB_ENGINE} - SOLR_URL=${SOLR_URL} - SOLR_PORT=${SOLR_DOCKER_PORT} - SOLR_PATH=${SOLR_PATH} - MQ_USERNAME=${RABBITMQ_DEFAULT_USER} - MQ_PASSWORD=${RABBITMQ_DEFAULT_PASS} - MQ_URL=rabbitmq - MQ_PORT=${MQ_DOCKER_PORT} - EMAILSERVICE_QUEUE_NAME=${EMAILSERVICE_QUEUE_NAME} - SITE=${SITE} networks: microservice_net: external: true name: microservice_net
The code snippet showcases a sample Docker Compose file used to create my Docker container. I’ve chosen Docker Compose as my orchestration tool, but other options like Kubernetes are available. I’ve stored my environment variables in a separate file, enabling me to maintain a local copy and update them during deployment from the release pipeline.
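For illustration, here is a hypothetical trimmed-down variant of the file for a staging or production server. The service and network names mirror the file above, but the managed-database variables are placeholders I’ve invented for this sketch, not the exact contents of my production file:

```yaml
version: "3.9"

# docker-compose.prod.yml (hypothetical): only the app container remains.
# Redis and Postgres run as managed services outside Docker, so their
# containers are omitted and the app points at external endpoints.
services:
  app:
    image: ${REGISTRY_SERVER}/application:${IMAGE_TAG}
    restart: always
    ports:
      - 127.0.0.1:${HOST_PORT}:${DOCKER_PORT}
    environment:
      - APP_SECRET=${APP_SECRET}
      - MODE=production
      # Managed database endpoint instead of a 'postgres' container.
      - POSTGRES_URL=${MANAGED_POSTGRES_HOST}
      - POSTGRES_PORT=${MANAGED_POSTGRES_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
```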
One of the important sections of the Docker Compose file is the image path:
version: "3.9" services: app: image: ${REGISTRY_SERVER}/application
The image path, resembling ‘myusername/application’, points to a repository on Docker Hub. This introduces us to the second component of our workflow: the Deployment Package.
1.2 Deployment Package
A critical aspect of configuring automated deployments is versioning the deployments themselves. While we strive to keep our production application bug-free, maintaining a flawless codebase is challenging, particularly in large systems or when multiple developers contribute to the same codebase. Therefore, it’s vital to ensure that if a business-critical bug slips into production, we can roll back our changes and restore the application to a previous, stable state. This makes packaging our code and maintaining version control absolutely essential.
Git is a popular choice for versioning codebases, and I’ve utilized the Git repository provided by Azure DevOps to store my code. However, Docker empowers us to package not just the code but also the environment needed to execute it. Docker images are standardized packages containing all the necessary files, binaries, libraries, and configurations to run a container.
Consequently, versioning our deployments through Docker images is an excellent way to ensure our application can run on most devices capable of running Docker. Docker Hub is a popular choice for storing Docker images in repositories. We can create a repository on Docker Hub and store different versions of a Docker image. It also allows us to create private repositories to limit access to those packages. Therefore, I’ve chosen Docker Hub as my package repository.
There are other ways to version our deployments. For example, Azure DevOps offers the ability to store artifacts, which are similar to deployment packages and can be generated by the release pipeline after a successful build. We could even store the images in our own custom repository or use cloud storage like Amazon S3. However, this would require custom code to pull the images from these locations, copy them to the correct place on our application server, and rebuild the running Docker container.
During deployment, our Azure DevOps release pipeline will build our codebase, generate a Docker image with a unique version number, and push this image to our Docker Hub repository. In the second deployment stage, the release pipeline will access the EC2 instance and pull the specified Docker image for deployment. This means that in the event of a rollback, we can simply specify a stable version and let the deployment pipeline handle the rest. We can be confident that the stable version will work correctly because the Docker image encapsulates the required environment configuration.
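As a rough illustration, the build-and-push step could be expressed with the built-in Docker task, as in the YAML sketch below. The service connection name dockerhub-connection and the repository myusername/application are placeholders; my own pipeline configures the equivalent task through the Azure DevOps portal:

```yaml
# A minimal sketch of the 'build pipeline for release'.
# Assumes a Docker Hub service connection named 'dockerhub-connection'
# (hypothetical) and a Dockerfile at the repository root.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push application image
    inputs:
      containerRegistry: dockerhub-connection
      repository: myusername/application
      command: buildAndPush
      Dockerfile: Dockerfile
      tags: |
        $(Build.BuildId)
```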
In practice, these two steps correspond to two distinct tasks within the Azure DevOps release pipeline: one builds and pushes the image, and the other pulls it onto the servers. This brings us to the next component of our workflow: Azure DevOps. This component functions like a deployment orchestration manager, overseeing the entire CI/CD workflow.
1.3 Azure DevOps
Azure DevOps supports a collaborative culture and a set of processes that bring together developers, project managers, and contributors to develop software. It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches.
Azure DevOps is a vast topic in its own right, offering a suite of features that assist organizations in building and managing automated deployment workflows.
For our workflow, we’ll utilize Azure DevOps to accomplish the following tasks:
– Store our application code in an Azure Git repository: This centralizes our code and facilitates version control.
– Create a build pipeline for validating pull requests and performing necessary tests: This ensures that the application is free of bugs before merging changes (see the sketch after this list).
– Create a build pipeline to package our application code and generate a proper Docker image: This pipeline will run whenever a pull request is completed. It will also push the generated image to Docker Hub for easy deployment.
– Create a release pipeline that gets triggered upon successful completion of the build pipeline for release: This pipeline will deploy the Docker image to all application servers, ensuring consistent deployment across the environment.
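To make the pull request validation pipeline concrete, here is a minimal sketch. It assumes the test suite runs inside the application container and that the compose service is named app, as in the earlier file; the file and branch names are illustrative. Note that for Azure Repos, this pipeline is attached to pull requests through a branch policy on the target branch rather than a trigger in the YAML itself:

```yaml
# A minimal sketch of the PR build pipeline (hypothetical).
trigger: none  # runs only via the pull request branch policy

pool:
  vmImage: ubuntu-latest

steps:
  # Build the image and run the Django test suite inside the container.
  - script: |
      docker compose build app
      docker compose run --rm app python manage.py test
    displayName: Build image and run tests
```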
Among the numerous features available, let’s focus on the following for the purpose of this article:
1. DevOps Git Repository (Repos): It provides a Git repository to store the source code.
2. Build Pipeline: We have the flexibility to create multiple build pipelines for the same repository. Typically, I set up two build pipelines per repository. The first one is a ‘PR build pipeline,’ triggered whenever a developer creates a pull request or pushes new changes to an existing one. The second is a ‘build pipeline for release’ which is executed when a developer completes their pull request and merges the changes into the main branch. This second pipeline then triggers a release, performing the necessary tasks to deploy the changes to either the staging or production server.
3. Release Pipeline: A release pipeline comprises one or more stages and a trigger. Each stage consists of a collection of one or more tasks, typically configured to deploy the application to the application servers (or deployment group). A trigger is a predefined event that initiates the deployment process. One example of a trigger could be the successful completion of the “build pipeline for release.”
So, whenever a developer completes their pull request and merges the changes into the main branch, the “build pipeline for release” is triggered. It carries out all the necessary tasks to create, package, and push the Docker image to Docker Hub. Upon successful completion of this pipeline, the first stage of the release pipeline is triggered. The release pipeline then executes all the tasks needed to pull the Docker image onto each application server (sequentially) and rebuild the Docker containers. (A sketch of this deployment stage follows the list below.)
4. Tasks and Variables: Each build or deployment pipeline consists of a series of tasks that need to be executed. These tasks are performed sequentially, and typically, a failure in any one task results in a rollback of the changes made during the build or deployment process. Each pipeline also includes a set of environment variables, whose values are injected into the application during deployment. This allows us to manage our environment variables from a central location and, importantly, keeps them separate from the main application for security purposes.
5. Deployment Groups: Each deployment group encompasses one or more deployment agents. These groups are beneficial when managing a cluster of servers hosting the same application for a specific environment. For instance, a production environment might have 10 servers behind a load balancer, each running the same application. When deploying code to production, the same process needs to be executed on all of them. Moreover, a deployment failure on one server could signal an error that necessitates a rollback. Deployment groups simplify the management of these server clusters for administrators.
6. Deployment Agent: A deployment agent is the software running on each of the deployment servers mentioned above. When setting up a CI/CD pipeline, we install software provided by Azure DevOps on every server where the application will be deployed. It runs in the background and maintains a persistent connection with Azure DevOps; during deployment, Azure DevOps communicates with the servers through this software.
7. Agent pool: An agent pool is similar in spirit to a deployment group: it is a group of agents, each a piece of software running on a server, used to build the application and carry out the tasks specified in the build or release pipeline. Azure DevOps offers hosted agent pools that we can leverage to execute our pipelines. Ideally, each of these servers should be ephemeral (short-lived) and stateless, serving solely to execute the tasks defined in the pipeline.
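For reference, here is a minimal sketch of the deployment stage described in points 3 and 4 above, written as a YAML deployment job. My actual setup uses a classic release pipeline with a deployment group configured through the UI, so the environment name and install path below are hypothetical; the script mirrors what the Bash task runs on each server:

```yaml
# A hypothetical sketch of the release stage.
stages:
  - stage: Deploy
    variables:
      IMAGE_TAG: $(Build.BuildId)  # version of the image to deploy
    jobs:
      - deployment: DeployApp
        environment: production-servers  # hypothetical environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - script: |
                    cd /opt/application   # hypothetical path on the server
                    # Pull the image version selected for this release
                    # and recreate the running container from it.
                    docker compose pull app
                    docker compose up -d app
                  displayName: Pull image and rebuild containers
```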
1.4 Application Server
The application server, in this case, is a web server hosting our application. I’ve utilized an AWS EC2 instance running Ubuntu 22 as the server and configured it with:
– Nginx reverse proxy: To forward incoming requests to the Docker container running our application.
– Docker: To manage and run the Docker containers.
– Azure Agent: To establish a connection with the Azure DevOps deployment group. Installing the agent on a server is fairly easy. Azure DevOps provides us with a script that we can just run to install and start the software on all popular operating systems.
Ideally, when employing a CI/CD pipeline for deployment, we shouldn’t need to directly access the application server. The Azure agent running on the server handles the deployment process for us. However, occasional access might be required to debug issues or to update the PAT (Personal Access Token) used to authorize the Azure Agent installed on the server.
And that’s a wrap on Part 1!
We’ve successfully laid the groundwork, introducing the key players in our automated deployment workflow. But the real magic happens when these components work together seamlessly.
So, what’s next? In Part 2, we’ll roll up our sleeves and dive into the nitty-gritty of configuring this workflow. We’ll connect the dots, ensuring each component plays its part in the grand symphony of automation. But we won’t stop there! We’ll also explore ways to enhance our workflow, pushing the boundaries of automation even further. And of course, we’ll tackle the inevitable challenges head-on, armed with practical solutions to keep your deployments sailing smoothly.
Get ready to level up your deployment game! Part 2 is coming soon, so stay tuned. In the meantime, keep those creative juices flowing and those automation dreams alive.
References
1. https://docs.docker.com/guides/docker-concepts/the-basics/what-is-an-image/
2. https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
3. https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592
4. https://azure.microsoft.com/en-us/products/devops
5. https://www.docker.com/
6. https://hub.docker.com/
7. https://aws.amazon.com/ec2/
8. https://www.djangoproject.com/
9. https://git-scm.com/downloads
10. https://nginx.org/en/