
Deploying a Go Microservice in Kubernetes

·1427 words·7 mins·
Application Development

Most of my experience with web applications is with monoliths deployed to a PaaS, or to traditional servers using configuration management tools. Last week, I immersed myself in a different paradigm: microservices. In this post, I’m going to share what I learned by deploying a Go application on top of Kubernetes.

To follow along, you’ll need Go, Docker, and Kubernetes installed on your system. If you don’t have access to a cluster already, I’d recommend using something like k3d, which runs a K3s cluster inside Docker on your machine.
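For example, with k3d installed, creating a local cluster is a single command (the cluster name here is arbitrary). Conveniently, K3s ships with the Traefik ingress controller, which we’ll rely on later.

k3d cluster create hello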

The hello server
#

We’ll start by writing an elementary microservice: a web server that responds with Hello and whatever we pass as the path of the URL.

In a clean directory, we initialize and tidy the Go module. You can replace my GitHub username, mauromorales, with yours.

go mod init github.com/mauromorales/hello-server
go mod tidy

Create the file main.go, which will describe our microservice:

// A simple web server that responds with "Hello " and whatever you pass as the
// path of the URL. It also logs the requests.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func Log(handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL)
		handler.ServeHTTP(w, r)
	})
}

func main() {
	http.HandleFunc("/", HelloServer)
	log.Fatal(http.ListenAndServe(":8080", Log(http.DefaultServeMux)))
}

func HelloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!\n", r.URL.Path[1:])
}

To test that everything is working properly, run the microservice with

go run main.go

and in a different terminal make a web request:

% curl http://localhost:8080/Go
Hello, Go!

You should also see a log line for the request in the terminal running hello-server:

% go run main.go 
2022/12/30 11:42:42 127.0.0.1:59068 GET /Go
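If you prefer an automated check, here’s a minimal test sketch (a hypothetical main_test.go, exercising the handler directly with net/http/httptest); run it with go test:

package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHelloServer(t *testing.T) {
	// Build a fake request and a recorder to capture the response
	req := httptest.NewRequest(http.MethodGet, "/Go", nil)
	rec := httptest.NewRecorder()

	HelloServer(rec, req)

	if got, want := rec.Body.String(), "Hello, Go!\n"; got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}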

A container version
#

Kubernetes might be written in Go, but it wouldn’t know what to do with our hello-server as is. Kubernetes runs containers, so let’s put our code into one. In a file called Dockerfile, add the following content:

FROM golang AS builder

WORKDIR /app/

COPY . .

# Build a statically linked binary so it can run on Alpine
RUN CGO_ENABLED=0 go build -o hello-server /app/main.go

FROM alpine:latest

WORKDIR /app

# Copy only the compiled binary, not the source code
COPY --from=builder /app/hello-server /app/

EXPOSE 8080

CMD ["./hello-server"]
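The two-stage build keeps the final image small: the golang stage compiles a statically linked binary (CGO_ENABLED=0 removes the dependency on libc), and only that binary ships on top of Alpine.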

Let’s first build an image

docker build -t docker.io/mauromorales/hello-server:0.1.1 .

When it’s done building, it will show up as one of your images

% docker images
REPOSITORY                     TAG            IMAGE ID       CREATED         SIZE
mauromorales/hello-server      0.1.1          3960783c4afe   36 seconds ago   19.8MB

So let’s run it

docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1

And in a different terminal test that it’s still working as expected

% curl http://localhost:8080/Docker
Hello, Docker!

Looking back at the running container, we see that again our request was logged

% docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1
2022/12/30 10:48:57 172.17.0.1:58986 GET /Docker

Deploying hello-server to Kubernetes
#

Let’s begin by pushing the image we built in the last step to Docker Hub, for which you’ll need an account.

docker login --username mauromorales

And once logged in, you can push the image

docker push mauromorales/hello-server:0.1.1

We’ll proceed in iterations, to get to know the different components of Kubernetes.

Pods
#

Initially, we will deploy just a pod (a grouping mechanism for containers, and the smallest deployable unit in Kubernetes) holding one single container. To achieve this, create a file called pod.yaml and add the following definition:

apiVersion: v1
kind: Pod
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  containers:
    - name: hello-server
      image: mauromorales/hello-server:0.1.1
      imagePullPolicy: Always
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP

And apply it:

% kubectl apply -f pod.yaml
pod/hello-server created

You should now see it listed:

% kubectl get pods         
NAME                         READY   STATUS    RESTARTS   AGE
hello-server                 1/1     Running   0          111s

While the pod is running in the background, we need to forward the port to access it:

kubectl port-forward pod/hello-server 8080:8080

Now you can test again that it is working by curling it from a different terminal.

% curl http://localhost:8080/Pod
Hello, Pod!

But if you go back to the port-forwarding terminal, you will not see any request logs; all you see is the output of the port-forward command.

% kubectl port-forward pod/hello-server 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080

To read the pod’s logs, we use kubectl logs.

% kubectl logs pod/hello-server 
2022/12/30 10:59:56 127.0.0.1:51866 GET /Pod
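To keep streaming new log lines as they arrive, you can follow the logs instead:

kubectl logs -f pod/hello-server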

Services
#

So far so good, but unfortunately a pod on its own doesn’t really implement the microservice pattern: if the pod is restarted, it might come back with a different IP. For our little example this is not a problem, but if we had several microservices talking to each other, we would need a way to share the new IPs between them. Thankfully, Kubernetes comes with a solution to this issue: services, which give a set of pods a stable name and address.
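You can see the pod’s current (ephemeral) IP with:

kubectl get pod hello-server -o wide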

Let’s write one inside service.yaml

apiVersion: v1
kind: Service
metadata:
  name: hello-server-svc
spec:
  selector:
    app: hello-server
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Now, apply the service:

% kubectl apply -f service.yaml 
service/hello-server-svc created

And as usual, we do some port forwarding, but this time to the service instead of the pod:

kubectl port-forward service/hello-server-svc 8080:8080

Let’s test it in the second terminal

% curl http://localhost:8080/Service
Hello, Service!

And look at the service logs

% kubectl logs service/hello-server-svc
2022/12/30 11:15:00 127.0.0.1:39346 GET /Service
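Inside the cluster, the service is also reachable by name thanks to cluster DNS, which is what makes this pattern work for service-to-service calls. As a quick sketch (the pod name curl-test is arbitrary, and this assumes pulling the public curlimages/curl image is acceptable in your cluster), you can spin up a throwaway pod and curl the service by name:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://hello-server-svc:8080/DNS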

Deployments
#

This is looking much better now. If I wanted to scale this service, all I’d need to do is create more pods with the hello-server label, but doing that by hand would be tedious and error-prone. Thankfully, Kubernetes gives us deployments, which handle that for us and also give us deployment strategies. Let us then create a deployment with three replicas.

First, we need to delete the pod we created.

% kubectl delete pod/hello-server
pod "hello-server" deleted

And in a file called deployment.yaml add the following description:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server 
    spec:
      containers:
      - name: hello-server
        imagePullPolicy: Always
        image: mauromorales/hello-server:0.1.1
        ports:
        - containerPort: 8080

and apply it

kubectl apply -f deployment.yaml

When it’s finished, you should be able to list the generated pods:

% kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-5c7c6f798f-hp99p   1/1     Running   0          13s
hello-server-5c7c6f798f-f2b4c   1/1     Running   0          13s
hello-server-5c7c6f798f-2fxdm   1/1     Running   0          13s

These three pods are the ones the deployment created for us from the template, each with a generated name.
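If we ever wanted five replicas instead of three, scaling would be a single command (5 here is just an arbitrary number):

kubectl scale deployment/hello-server --replicas=5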

We forward traffic to the service again:

kubectl port-forward service/hello-server-svc 8080:8080

And test it out

% curl http://localhost:8080/Deployment
Hello, Deployment!

Let us also check the logs:

% kubectl logs deployment/hello-server
Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
2022/12/30 11:29:47 127.0.0.1:46420 GET /Deployment
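Deployments also give us rolling updates out of the box. As a sketch, if we published a newer image tag (0.1.2 here is hypothetical), rolling it out and watching its progress would look like this:

kubectl set image deployment/hello-server hello-server=mauromorales/hello-server:0.1.2
kubectl rollout status deployment/hello-server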

Port forwarding is nice, but in its current state it only maps to one replica, which is less than ideal. In order to load-balance between our replicas, we need to add an ingress rule.

Create a file ingress.yaml with the following content:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-server
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-server-svc
            port:
              number: 8080

And apply it

% kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/hello-server created
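You can verify the rule was registered before testing it:

kubectl get ingress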

As you probably expect, we need to forward traffic once more, but this time instead of forwarding to our service, we forward to the traefik service (served on port 80):

kubectl port-forward -n kube-system service/traefik 8080:80

Let’s test it out, this time by sending 20 requests:

% for i in `seq 1 20`; do curl http://localhost:8080/Traefik; done
...
Hello, Traefik!

And have a look at the logs

% kubectl logs deployment/hello-server
Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik

And we can also check the individual logs of each pod

% kubectl logs pod/hello-server-5c7c6f798f-hp99p --since=4m
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:43070 GET /Traefik
% kubectl logs pod/hello-server-5c7c6f798f-f2b4c --since=4m
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:59596 GET /Traefik
% kubectl logs pod/hello-server-5c7c6f798f-2fxdm --since=4m
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik
2022/12/30 11:43:14 10.42.0.22:40840 GET /Traefik

And voilà, you can see how the requests were distributed among our three replicas.
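When you’re done experimenting, you can clean everything up with the same manifests:

kubectl delete -f ingress.yaml -f deployment.yaml -f service.yaml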

I hope you had as much fun as I did playing with Kubernetes.
