Continuous Delivery and Deployment (CD/CD) with Docker and Kubernetes

Learn how to set up continuous delivery and deployment in your DevOps pipeline using Docker and Kubernetes. Containerize your code, automate deployments, and achieve scalability.

Continuous Deployment

Welcome back to our DevOps Pipeline series! You’ve already set up Source Code Management (SCM), integrated Continuous Integration (CI) with Jenkins, and built a strong foundation with automated testing. (If not, be sure to catch up on SCM, CI, and automated testing before diving in!)

Today, we’re focusing on the next crucial stages: Continuous Delivery (CD) and Continuous Deployment (CD). These stages automate the process of packaging, shipping, and deploying your code, allowing you to release new features quickly and reliably. In this post, we’ll use Docker for containerizing your application and Kubernetes for deploying and orchestrating these containers efficiently.

Here’s what we’ll cover:

  • Building and containerizing your app with Docker.
  • Setting up a Kubernetes cluster to deploy your application.
  • Integrating Jenkins for automated deployment to your cluster.

Ready to make your code easily deployable and scalable? Let’s dive into Docker and Kubernetes!

Continuous Delivery (CD) is the practice of packaging your application code into deployable units and ensuring that these units are always in a deployable state. Once code passes all CI pipeline stages (such as builds and tests), it is ready for deployment at any time.

Continuous Deployment, on the other hand, extends CD by automatically deploying every change that passes all tests to a production environment, reducing manual intervention and allowing for rapid iterations.

With CD/CD, you can:

  • Release new features faster: Automate the deployment process for quicker time to market.
  • Ensure code stability: Every change goes through the same automated validation before it is deployed, keeping releases consistent and predictable.
  • Facilitate team collaboration: Development, QA, and operations teams work more efficiently when delivery and deployment are automated.

Pro Tip: Aim to maintain small, incremental changes in your pipeline, making it easier to find and fix issues quickly.

Docker is the foundational tool for containerizing your application. Containers are isolated environments that package your application and its dependencies, ensuring it runs consistently across different environments.

2.1 Installing Docker

For Windows/Mac: Download and install Docker Desktop from the official Docker website; it bundles the Docker Engine and the docker CLI.

For Linux:

1. Update the package index and install Docker:

sudo apt update
sudo apt install docker.io -y

2. Start and enable Docker:

sudo systemctl start docker
sudo systemctl enable docker

Verify the installation by checking the Docker version:

docker --version

Pro Tip: If you encounter permission issues when running Docker commands, add your user to the Docker group:

sudo usermod -aG docker $USER
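
The group change takes effect only after you log out and back in; to pick it up in your current shell without re-logging, you can run:

newgrp docker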

2.2 Writing a Dockerfile

A Dockerfile is a text file containing the instructions used to build a Docker image. Below is an example Dockerfile for a Node.js application:

Create a Dockerfile in your project root:

# Use an official Node.js image as the base
FROM node:14

# Set the working directory
WORKDIR /app

# Copy package.json and install dependencies
COPY package.json ./
RUN npm install

# Copy the rest of the code
COPY . .

# Expose the app’s port
EXPOSE 3000

# Run the application
CMD ["npm", "start"]
  • FROM node:14: Uses the official Node.js image as the base.
  • WORKDIR /app: Sets the working directory inside the container.
  • COPY package.json ./ and RUN npm install: Copies the dependency manifest and installs the dependencies.
  • COPY . .: Copies the rest of the application code.
  • EXPOSE 3000: Documents that the app listens on port 3000 (adjust this to the port your app actually uses).
  • CMD ["npm", "start"]: Specifies the default command to run your app.
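
It also helps to add a .dockerignore file next to the Dockerfile so that COPY . . doesn't pull local artifacts into the image. A minimal example (adjust to your project):

node_modules
npm-debug.log
.git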

2.3 Building and Running a Docker Container

1. Build the Docker image:

docker build -t my-app:1.0 .

This will create a Docker image tagged as my-app:1.0.

2. Run the Docker container:

docker run -d -p 3000:3000 --name my-app-container my-app:1.0
  • -d: Runs the container in detached mode.
  • -p 3000:3000: Maps the container’s port 3000 to port 3000 on your host.
  • --name my-app-container: Names the container.

3. Verify the container is running:

docker ps
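
To check the application's output, and, assuming your app serves HTTP on port 3000, to hit it from the host:

docker logs my-app-container
curl http://localhost:3000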

Pro Tip: Use multi-stage builds in your Dockerfile to optimize image size and keep production images lean.
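
A minimal sketch of what a multi-stage build could look like for the Node.js app above (the slim runtime image is an assumption; adapt the stages to your own build process):

# Stage 1: install dependencies and prepare the app
FROM node:14 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Stage 2: copy the prepared app into a slimmer runtime image
FROM node:14-slim
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]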

3.1 Setting Up a Kubernetes Cluster

Kubernetes is a powerful platform to deploy, scale, and manage your containerized applications. Here’s how to get started:

Local Development with Minikube:

1. Install Minikube:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

2. Start Minikube:

minikube start

Verify Minikube is running by listing the nodes:

kubectl get nodes
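
If you don't have a standalone kubectl installed, Minikube bundles one that you can invoke through:

minikube kubectl -- get nodes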

3.2 Creating a Kubernetes Deployment and Service

To deploy your app on Kubernetes, create a deployment and a service.

1. Create a Deployment File (deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 3000
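
Note that the cluster must be able to find the my-app:1.0 image. Since it has only been built locally so far, one option with Minikube is to load it into the cluster before deploying (a sketch, assuming the image from section 2.3):

minikube image load my-app:1.0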

2. Create a Service File (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: NodePort
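
This Service routes traffic arriving on port 80 to port 3000 inside the pods selected by app: my-app, and type: NodePort additionally exposes it on a port of each cluster node so the app can be reached from outside the cluster.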

3. Deploy to Kubernetes:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

4. Access Your App: Use Minikube to get the service URL:

minikube service my-app-service

3.3 Managing Your Kubernetes Application

1. Scaling your application:

kubectl scale deployment my-app-deployment --replicas=5

This scales your app up from the original 3 replicas to 5.

2. Monitoring your pods:

kubectl get pods

Pro Tip: Use kubectl describe pod <pod-name> for detailed pod status and events, and kubectl logs <pod-name> to view a pod's application logs when debugging.

4.1 Adding Jenkins to Your CD/CD Pipeline

To automate your deployment to Kubernetes, update your Jenkinsfile to include stages for building, pushing, and deploying your Docker image.

Example Jenkinsfile:

pipeline {
    agent any

    stages {
        stage('Build Image') {
            steps {
                sh 'docker build -t my-app:1.0 .'
            }
        }
        stage('Push Image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-hub-credentials', passwordVariable: 'PASSWORD', usernameVariable: 'USERNAME')]) {
                    sh 'echo $PASSWORD | docker login -u $USERNAME --password-stdin'
                    sh 'docker tag my-app:1.0 $USERNAME/my-app:1.0'
                    sh 'docker push $USERNAME/my-app:1.0'
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                sh 'kubectl apply -f deployment.yaml'
                sh 'kubectl apply -f service.yaml'
            }
        }
    }
}
  • Build Image: Creates a Docker image from your Dockerfile.
  • Push Image: Pushes the image to a Docker registry (e.g., Docker Hub).
  • Deploy to Kubernetes: Uses kubectl to apply the Kubernetes configuration and deploy the container.
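
One detail to watch: deployment.yaml references the local my-app:1.0 tag, while the pipeline pushes $USERNAME/my-app:1.0 to Docker Hub. A simple way to point the cluster at the pushed image (a sketch, with <your-dockerhub-username> as a placeholder) is to update the Deployment's image in the deploy stage:

kubectl set image deployment/my-app-deployment my-app=<your-dockerhub-username>/my-app:1.0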

4.2 Deploying Containers Automatically to Kubernetes

Ensure Jenkins has access to both:

  • Docker Registry Credentials: Store credentials in Jenkins securely for pushing images.
  • Kubeconfig for Cluster Access: Ensure Jenkins can access your Kubernetes cluster by using a kubeconfig file.

Whenever code is pushed to your repository, Jenkins will automatically build, push, and deploy the new image.
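
If you use the Kubernetes CLI plugin for Jenkins, the deploy stage can wrap its kubectl calls in the stored kubeconfig. A sketch, assuming the plugin is installed and the kubeconfig is saved as a Jenkins credential with the (hypothetical) id 'kubeconfig':

stage('Deploy to Kubernetes') {
    steps {
        // Provide kubectl with cluster access via the stored kubeconfig
        withKubeConfig([credentialsId: 'kubeconfig']) {
            sh 'kubectl apply -f deployment.yaml'
            sh 'kubectl apply -f service.yaml'
        }
    }
}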

Finally, here are a few best practices to keep your CD/CD pipeline healthy:

  1. Use Lightweight Base Images: Optimize Docker images for faster builds and a reduced attack surface.
  2. Automate Versioning: Tag Docker images with semantic versioning and Git commit hashes (see the sketch after this list).
  3. Implement Monitoring and Alerts: Use tools like Prometheus and Grafana to monitor cluster health and performance.
  4. Implement Rollbacks: Use Kubernetes’ built-in rollback features to recover quickly from deployment failures (example below).
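
For point 2, a minimal sketch of tagging an image with the current Git commit (assumes a Git checkout and a Docker Hub username placeholder), and for point 4, the rollback command Kubernetes provides out of the box:

# tag the image with the short commit hash instead of a fixed version
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t my-app:${GIT_SHA} .
docker tag my-app:${GIT_SHA} <your-dockerhub-username>/my-app:${GIT_SHA}
docker push <your-dockerhub-username>/my-app:${GIT_SHA}

# roll a failed deployment back to its previous revision
kubectl rollout undo deployment/my-app-deployment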

With Docker and Kubernetes, you’ve taken a significant step towards Continuous Delivery and Deployment in your DevOps pipeline. You’ve learned how to containerize your app, deploy it to a Kubernetes cluster, and automate the entire process with Jenkins, ensuring faster and more reliable releases.

Next Up: In the final part of our series, we’ll cover how to monitor and log your DevOps pipeline using Prometheus and Grafana to track performance and ensure everything runs smoothly.

Let’s Keep the Deployment Conversation Going!

What challenges have you faced while deploying containers, or what tools do you prefer for CD/CD? Share your insights below! And if you found this post helpful, share it with your DevOps community! 🚀

Read “Monitoring and Logging Your DevOps Pipeline with Prometheus and Grafana” →

