Mauro Morales

software developer

  • My Plans for FOSDEM 2024

    Kairos

    Just like in 2023, one of my proposed talks got accepted. However, I will not be delivering it. Instead, my colleague Dimitris Karakasilis will be doing it because he has more experience on the topic. I will still be there taking pictures and giving away some Kairos swag, in case you want yours 😉

    Also, get ready for our first in-person Kairos meeting. If you’re using Kairos, please join us; we would be thrilled to meet some of our community members. You don’t have to be a Kairos user to be welcome, though, this is also a good time to talk about Kairos in general. We will share the details of the meeting on our Slack, Matrix and Mastodon accounts, where you can also reach us at any moment.

    We also applied for a panel about Special Purpose Operating Systems, together with Flatcar, Bottlerocket and Unikraft. Unfortunately, we haven’t heard back yet, so stay tuned for this one; we will announce it on one of our channels.

    On Saturday morning we want to go for a run, so if you’re looking for a running group, please reach out. And at night we will also probably join a party or hang out in one of the great cafés in Brussels. So you have many chances to join us to talk Kairos, running, or whatever.

    MNT Reform

    I’m a big fan of my MNT Reform open-hardware laptop. I’ll bring it so you can play with it if you’re interested. And hopefully, I’ll get to meet with other MNT Reformers. You can reach out via my personal Mastodon.

    Talks I plan to attend

    There’s never enough time to attend all the talks I’d like to during FOSDEM. These are the ones that caught my attention without too much overlap.

  • Reading Binary Files

    Some files in a computer system are written for humans and contain text.

    % file /etc/hosts
    /etc/hosts: ASCII text

    But many other files are made for the computer to execute, and it isn’t possible to read them using a tool like cat.

    % cat /bin/ls | head
    ����@�
          ��Z������
    
    

    This is because they are binary files:

    % file /bin/ls
    /bin/ls: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
    /bin/ls (for architecture x86_64):    Mach-O 64-bit executable x86_64
    /bin/ls (for architecture arm64e):    Mach-O 64-bit executable arm64e

    However, it is possible to read them using a tool like hexdump:

    % hexdump -C /bin/ls | head
    00000000  ca fe ba be 00 00 00 02  01 00 00 07 00 00 00 03  |................|
    00000010  00 00 40 00 00 01 1c c0  00 00 00 0e 01 00 00 0c  |..@.............|
    00000020  80 00 00 02 00 01 80 00  00 01 5a f0 00 00 00 0e  |..........Z.....|
    00000030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

    Each byte is shown as a pair of hexadecimal digits: the left digit holds the high 4 bits and the right digit the low 4 bits. Not every byte represents a visible character, so I’m going to take 40, which represents the @ symbol. Split apart, the hexadecimal digit 4 is 0100 in binary and 0 is 0000. Merged back together, they form the binary number 01000000, or 64 in decimal. We can validate this with an ASCII table like the one below.

    DEC   HEX   BIN        ASCII Symbol
    63    3F    00111111   ?
    64    40    01000000   @
    65    41    01000001   A

    Table source: https://www.ascii-code.com/

    The same conversion, as a diagram (Mermaid source):

    stateDiagram-v2
        40 --> 4
        40 --> 0
        4 --> 0100
        0 --> 0000
        0100 --> 01000000
        0000 --> 01000000
        01000000 --> 64
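
    To double-check the arithmetic, here is a quick Go sketch of my own (not related to any tool mentioned here) that splits 0x40 into its two nibbles and prints the decimal, binary, and ASCII forms:

    package main

    import "fmt"

    func main() {
    	b := byte(0x40)
    	high := b >> 4  // left hex digit: 0x4 -> 0100
    	low := b & 0x0F // right hex digit: 0x0 -> 0000
    	fmt.Printf("nibbles: %04b %04b\n", high, low)
    	fmt.Printf("DEC %d  HEX %X  BIN %08b  ASCII %c\n", b, b, b, b)
    }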

    Hexdumpje

    To understand better how this works, I wrote a basic version of hexdump. The source code can be found at https://github.com/mauromorales/hexdumpje
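
    If you’re curious what such a tool involves, here is a minimal sketch of a hexdump-style loop, my own illustration and not the actual hexdumpje source: offset, hex bytes, then a printable-ASCII column.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: hexdump FILE")
    		os.Exit(1)
    	}
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Print 16 bytes per line: offset, hex pairs, printable ASCII
    	for off := 0; off < len(data); off += 16 {
    		end := off + 16
    		if end > len(data) {
    			end = len(data)
    		}
    		chunk := data[off:end]
    		fmt.Printf("%08x ", off)
    		for _, c := range chunk {
    			fmt.Printf(" %02x", c)
    		}
    		fmt.Print("  |")
    		for _, c := range chunk {
    			if c >= 32 && c < 127 {
    				fmt.Printf("%c", c)
    			} else {
    				fmt.Print(".")
    			}
    		}
    		fmt.Println("|")
    	}
    }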

  • My Personal Experience Using AI

    There’s been a gigantic buzz around AI for a while now. Unless you’re living under a rock, it’s hard not to get hit by this topic. So, a month or two back, I decided to finally give it an honest shot and see if AI can bring any benefits to my work or personal life.

    Disclaimer: No AI assistant was used to write this blog post.

    Some colleagues have been using GitHub’s Copilot since the beta release and swear by it, and other colleagues say that OpenAI’s ChatGPT has become part of their daily flow, so I decided to try both.

    GitHub’s Copilot for code generation

    Context is crucial for AI, because models are trained on datasets: the quantity and quality of that data, plus the training algorithms, determine the quality of the resulting model, and different models will be better at different tasks. GitHub’s Copilot is designed to generate code, and it was trained on code hosted on GitHub.

    At the time of testing Copilot, my main project was Kairos, an OS (and more) for Edge Kubernetes. Kairos-specific software is written in Go, but like any other Linux distribution, it’s full of configuration files, scripts and build tooling: Dockerfiles to build images, configuration files for different utilities (e.g., GRUB), an Earthfile for building everything together, tons of YAML files and various shell scripts to do stuff here and there.

    I use JetBrains’ GoLand IDE, where you can effortlessly install and configure the Copilot plugin. However, the quality of the suggestions was terrible; usually, there were no suggestions at all. Only a couple of times was I impressed with the generated code, and for my particular use case, I wouldn’t consider it worth the 10 USD/month or 100 USD/year price tag.

    To be honest, I was very disappointed. The experience is not at all what I was told to expect, but there are many variables that could explain this gap. For one, those who recommended it all use VSCode as their main editor, and since that’s the editor GitHub uses in their workspaces, it wouldn’t be a surprise if it were much better supported there. Second, the type of project and combination of languages is different, although I would expect Go to be quite well supported. I also don’t really know how well Copilot has been trained to make recommendations on, say, configuring GRUB.

    But don’t let my words steer you away from trying it. See for yourself how well it works with your stack and setup, and then decide whether the subscription is worth it for you. As for me, I will give it a shot using VSCode, and if I don’t see a major improvement, I will cancel.

    ChatGPT as a review partner

    I’m well aware that ChatGPT can be used for code generation, but I decided to use it only for other work-related tasks: for example, asking it to help me understand code, debug issues, or improve my technical writing.

    Case 1: Explaining code

    Digging through some code, I came across the following. Most of it was clear to me, but I didn’t understand the need for the last line.

    Prompt

    Can you explain the following Go code?

    type ImageExtractor interface {
    	ExtractImage(imageRef, destination, platformRef string, local bool) error
    }
    
    type OCIImageExtractor struct{}
    
    var _ ImageExtractor = OCIImageExtractor{}

    Response

    This Go code defines an interface called ImageExtractor, which has one method called ExtractImage. The method takes four arguments: imageRef (a string representing the reference to an image), destination (a string representing the destination path for the extracted image), platformRef (a string representing the reference to the platform the image is built for), and local (a boolean indicating whether the image should be extracted from a local or remote source). The method returns an error if any error occurs during the extraction process.

    The code also defines a struct called OCIImageExtractor. This struct does not have any fields or methods of its own.

    Finally, the code creates a variable of type ImageExtractor that is initialized with an instance of the OCIImageExtractor struct. This line of code uses the blank identifier (_) to indicate that we don’t actually need to reference this variable, we just want to ensure that OCIImageExtractor implements the ImageExtractor interface. This is a way of enforcing that any other implementation of ImageExtractor will also have a ExtractImage method with the same signature.

    I find this super useful. It feels like reaching over to the dev at the desk next to me and asking them: “hey, do you know what this line here does?” You cannot do this with search engines and have much success.
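
    If you want to see that enforcement in action, here is a minimal sketch with hypothetical names (not the Kairos code): as long as EnglishGreeter has the Greet method, the blank-identifier line compiles; remove the method and the compiler complains right at that line.

    package main

    import "fmt"

    type Greeter interface {
    	Greet() string
    }

    type EnglishGreeter struct{}

    func (EnglishGreeter) Greet() string { return "Hello" }

    // Compile-time check: this only builds while EnglishGreeter
    // satisfies Greeter.
    var _ Greeter = EnglishGreeter{}

    func main() {
    	fmt.Println(EnglishGreeter{}.Greet())
    }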

    Case 2: Explaining other technical topics

    Now it’s time to reach over to the other side of my desk, to that fantastic sysadmin.

    Prompt

    Why do I have some files called dm-0, dm-1, dm-2 under the /dev directory?

    Response

    The files you are referring to, dm-0, dm-1, dm-2, are device mapper (DM) devices. Device mapper is a Linux kernel framework that provides a generic way to create virtual block devices by mapping physical block devices onto them. It is commonly used to create logical volumes for managing storage in Linux.

    When you create a logical volume using LVM (Logical Volume Manager), the LVM creates a device mapper device file that represents the logical volume. The dm-X device files are the device mapper devices that represent the logical volumes.

    For example, if you create a logical volume named “my_lv” using LVM, a device mapper device file named “/dev/dm-0” will be created, which represents the “my_lv” logical volume. If you create another logical volume named “your_lv”, a device mapper device file named “/dev/dm-1” will be created to represent the “your_lv” logical volume, and so on.

    You can use these device mapper devices just like any other block devices, such as hard disk partitions or USB drives, to store and retrieve data. However, it’s important to note that the underlying physical devices that the logical volumes are mapped onto may change, so it’s important to use the logical volume names (e.g. “my_lv” or “your_lv”) rather than the device mapper device names (e.g. “/dev/dm-0” or “/dev/dm-1”) when referencing the logical volumes in commands or scripts.

    Thanks to this, I was able to sort out a problem I was having while building some Kairos images, when all I had was a hunch that it was related to those files.
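
    To see the mapping for yourself, here is a small hedged Go sketch; it assumes the usual Linux sysfs layout, where /sys/block/dm-N/dm/name holds the friendly device-mapper name:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Each dm-N block device exposes its device-mapper name in sysfs
    	paths, err := filepath.Glob("/sys/block/dm-*/dm/name")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, p := range paths {
    		name, err := os.ReadFile(p)
    		if err != nil {
    			continue
    		}
    		// p looks like /sys/block/dm-0/dm/name; recover "dm-0"
    		dev := filepath.Base(filepath.Dir(filepath.Dir(p)))
    		fmt.Printf("%s -> %s", dev, name) // name already ends in a newline
    	}
    }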

    Case 3: Writing a call for paper for a conference

    During a pairing session with a colleague, we decided to use ChatGPT to help us write a call for papers for a conference. I will not post the prompt or the result here, but suffice it to say that we were able to use about 50% of the generated text. While 50% might not seem like a great result for a 3-5 paragraph text, it made the task less exhausting. Especially as a non-native English speaker, I find it useful to have some sample text to base my work on.

    All in all, I would highly recommend integrating ChatGPT into your daily work, especially if you are not on a team that values pair programming. It has saved me a lot of time and mental effort. The answers are not always correct, but they consistently point me in the right direction. I’m currently not paying for the subscription, but it’s on my to-do list, so I can report later on whether it’s worth it.

    Final Thoughts

    Just like with the introduction of search engines, I think we are at a similar inflection point. I’m not going to try to guess what AI will look like in the future, but from where I stand, I’m pretty sure AI will be part of our everyday lives. For this reason, I think we really need to pay attention to it, as individuals but also as a society. We must learn how to use it so that it can make our lives easier, that’s the whole point of technology, but we must understand that AI assistants are not encyclopedias; each tool has its purpose, advantages, and disadvantages. Speaking of disadvantages, I don’t think we need to be afraid of it becoming conscious, but I am afraid of companies or governments abusing it, so we need to build these services with privacy for the individual and transparency in mind. One such solution is the open-source project LocalAI, which I will write about in a future post.

  • Remote Setup with EdgeVPN

    Last week I started using my old 13″ laptop and left the bulky 15″ workstation permanently at my desk. This setup gives me portability without losing power when I’m connected to my home network. Today, I decided to configure EdgeVPN on both devices to also have this setup while on the road.

    EdgeVPN uses tokens to connect nodes to a network. Since I have a Nextcloud server that keeps files in sync on both of my laptops, I decided to put the EdgeVPN configuration file there and created a small script that reads from it to generate the token and decides which IP to give each device, based on its hostname:

    #!/bin/sh
    # Read the shared EdgeVPN config from Nextcloud and encode it as the token
    TOKEN=$(cat /path/to/Nextcloud/edgevpn-config.yaml | base64 -w0)

    # Pick a fixed IP for this machine based on its hostname
    IP=""
    if [ "$(hostname)" = "zeno" ]; then
    	IP="10.1.0.10"
    elif [ "$(hostname)" = "seneca" ]; then
    	IP="10.1.0.11"
    fi

    if [ "$IP" = "" ]; then
    	echo "IP not configured for $(hostname)"
    	exit 1
    else
    	echo "Using IP: $IP"
    fi

    edgevpn --token="$TOKEN" --address="$IP/24"
    

    Plus, I created a systemd service so I can use systemctl instead of having to remember the name of that shell script:

    [Unit]
    Description=EdgeVPN
    Wants=network.target
    
    [Service]
    ExecStart=/path/to/start-edgevpn.sh
    
    [Install]
    WantedBy=multi-user.target

    On any of the nodes, I can start EdgeVPN’s web UI and list all connected nodes.

  • Revived my Dell XPS 9350

    My work laptop has been giving me some trouble since I first installed openSUSE Tumbleweed. At first, it was just small annoyances, like not properly syncing the time. Installing the OS again is a bit of a hassle, as much as I enjoy doing it, so I found a workaround to reset it whenever it broke. However, last week it started freezing multiple times a day; the workaround was to hard shut down the machine, which was very annoying, but I was hoping it would get fixed in an upcoming upgrade. Tumbleweed has weekly upgrades, so waiting wasn’t that big of a deal. But with the latest update, my Docker setup stopped working with Earthly, which is my bread and butter for Kairos, so I decided to try a different distro.

    Switching distros can be a big deal, and I didn’t want to learn a new package manager and special configurations, so I went with something I’ve used before, Ubuntu 23.04. But even then, I first wanted to give it a try, just in case there were any red flags. So, I dusted off my personal Dell XPS 9350, an 8-year-old laptop, to test it out. So far, everything seems to be working well: much slower than the workstation, but still good enough and way more portable, so I’m probably going to start leaving my workstation at the desk.

  • Added ARM/RPI support for Ubuntu on Kairos

    Work this week came with a few challenges, but with enough patience and some help I was able to get the Ubuntu flavor for Kairos working on the Raspberry Pi.

    https://github.com/kairos-io/kairos/pull/1170

    Feedback loops when working on ARM are very slow, because I’m cross-compiling and because I have to burn the images onto SD cards. Switching contexts between tickets is not easy, but the waiting time is fine for working on chores, learning other things or playing with new technologies. I was mostly playing with ChatGPT, Copilot and LocalAI, which was a lot of fun.

    Most of the lost time was due to a misconfiguration of the serial console. The issue was caused by some files that were not copied into the right directory, which was not obvious, and without video output it was very tricky to debug. Thankfully, after looking into some errors in dmesg and a recommendation from a colleague, I was able to sort it out. A reminder that it’s always good to review one’s own code, and to step away from it occasionally and look at it with a fresh pair of eyes.

  • My Plans for FOSDEM 2023

    FOSDEM 2023 is just around the corner, and I’m very excited. This will be my first in-person tech conference since before the pandemic. My last talk in FOSDEM was back in 2016, so I’m very much looking forward to the presentation I will give on Sunday.

    There are many talks that I’m interested in listening to, and plenty of faces that I want to see in person again or for the first time. If you’re coming to FOSDEM and want to say hi, just ping me via mrls@hachyderm.io (preferably) or mauromrls@twitter.com (if you don’t have a Mastodon account).

    Saturday

    I plan to see all presentations in the Image-based Linux and Secure Measured Boot devroom.

    At some point, I will escape to the FOSS on Mobile Devices devroom because I’d like to meet Lukas Hartmann, the CEO of MNT, who will bring a prototype of the Pocket Reform. I also hope to talk about future extensions to the Reform laptop.

    At 1500 hours, I plan to visit the Go Devroom to check a talk about Five Steps to Make Your Go Code Faster & More Efficient.

    At 1600 hours, I want to attend the SourceHut meetup. I’ve been occasionally using SourceHut CI for personal projects. I feel we are putting too many eggs in the GitHub basket, and I like the ethos and team behind SourceHut, so I’d like to get more involved.

    Around 1800 hours, if I’m not too tired, I plan to check some talks in the CI/CD Devroom.

    I hope to join some party afterward, but I have no idea if there are any plans.

    Sunday

    I’m training for a race in March, so I plan to run on Sunday morning before arriving at FOSDEM. If anyone is interested in joining, that’d be pretty cool. My plan is to run for an hour or so.

    Once I arrive, I’d like to visit some stands and hopefully buy some interesting books.

    Before 1200 hours, I plan to be in the Distributions room.

    At 12:30, I will give a presentation about How we build and maintain Kairos, the new project I’m involved with.

    I plan to spend the rest of the time socializing before taking the train back to Ghent.

  • Deploying a Go Microservice in Kubernetes

    Most of my experience with web applications is with monoliths deployed to a PaaS, or to traditional servers using configuration management tools. Last week, I submerged myself in a different paradigm: microservices. In this post, I’m going to share what I learned by deploying a Go application on top of Kubernetes.

    To follow along, you’ll need Go, Docker, and Kubernetes installed on your system. I’d recommend using something like K3d to run K3s on your machine if you don’t have access to a cluster already.

    The hello server

    First, we’ll start by writing an elementary microservice. Let’s create a web server that responds with Hello and whatever we pass as the path of the URL.

    In a clean directory, we initialize and tidy Go. You can replace my GitHub username, mauromorales, with yours.

    go mod init github.com/mauromorales/hello-server
    go mod tidy

    Create the file main.go which will describe our microservice.

    // A simple web server that responds with "Hello " and whatever you pass as the
    // path of the URL. It also logs the requests
    package main
    
    import (
    	"fmt"
    	"log"
    	"net/http"
    )
    
    func Log(handler http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL)
    		handler.ServeHTTP(w, r)
    	})
    }
    
    func main() {
    	http.HandleFunc("/", HelloServer)
    	log.Fatal(http.ListenAndServe(":8080", Log(http.DefaultServeMux)))
    }
    
    func HelloServer(w http.ResponseWriter, r *http.Request) {
    	fmt.Fprintf(w, "Hello, %s!\n", r.URL.Path[1:])
    }
    

    To test everything is working properly, run the microservice with

    go run main.go
    

    and on a different terminal make a web request

    % curl http://localhost:8080/Go
    Hello, Go!
    

    And you should also see a log for the request on the terminal running the hello-server

    % go run main.go 
    2022/12/30 11:42:42 127.0.0.1:59068 GET /Go
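
    Before containerizing it, you could also verify the handler in isolation. Here is a quick test sketch using Go’s httptest package (my addition, not part of the original walkthrough); save it as main_test.go next to main.go and run go test:

    package main

    import (
    	"net/http"
    	"net/http/httptest"
    	"testing"
    )

    // A small sanity check for HelloServer, without starting a real server
    func TestHelloServer(t *testing.T) {
    	req := httptest.NewRequest(http.MethodGet, "/Go", nil)
    	rec := httptest.NewRecorder()
    	HelloServer(rec, req)
    	if got, want := rec.Body.String(), "Hello, Go!\n"; got != want {
    		t.Errorf("got %q, want %q", got, want)
    	}
    }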
    

    A container version

    Kubernetes might be written in Go, but it wouldn’t know what to do with our hello-server as-is: its workloads run as containers, so let’s put our code into one. In a file called Dockerfile, add the following content:

    # Build stage: compile the binary with CGO disabled so it runs on Alpine
    FROM golang as builder

    WORKDIR /app/

    COPY . .

    RUN CGO_ENABLED=0 go build -o hello-server /app/main.go

    # Runtime stage: ship only the binary on a small Alpine base
    FROM alpine:latest

    WORKDIR /app

    COPY --from=builder /app/ /app/

    EXPOSE 8080

    CMD ./hello-server
    

    Let’s first build an image

    docker build -t docker.io/mauromorales/hello-server:0.1.1 .
    

    When it’s done building, it will show up as one of your images

    % docker images
    REPOSITORY                     TAG            IMAGE ID       CREATED         SIZE
    mauromorales/hello-server      0.1.1          3960783c4afe   36 seconds ago   19.8MB
    

    So let’s run it

    docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1
    

    And in a different terminal test that it’s still working as expected

    % curl http://localhost:8080/Docker
    Hello, Docker!
    

    Looking back at the running container, we see that again our request was logged

    % docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1
    2022/12/30 10:48:57 172.17.0.1:58986 GET /Docker
    

    Deploying hello-server to Kubernetes

    Let’s begin by uploading the image we built in the last step to Docker Hub, for which you need an account.

    docker login --username mauromorales
    

    And once logged in, you can push the image

    docker push mauromorales/hello-server:0.1.1
    

    We will proceed in iterations, to understand the different components of K8s.

    Pods

    Initially, we will deploy just a pod (a grouping mechanism for containers) with one single container. To achieve this, create a file called pod.yaml and add the following definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-server
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: mauromorales/hello-server:0.1.1
          imagePullPolicy: Always
          ports:
            - name: http
          containerPort: 8080
              protocol: TCP
    

    And apply it:

    % kubectl apply -f pod.yaml
    pod/hello-server created
    

    You should now see it listed:

    % kubectl get pods         
    NAME                         READY   STATUS    RESTARTS   AGE
    hello-server                 1/1     Running   0          111s
    

    While the pod is running in the background, we need to forward the port to access it:

    kubectl port-forward pod/hello-server 8080:8080
    

    Now you can test again that it is working, by curling to it from a different terminal.

    % curl http://localhost:8080/Pod
    Hello, Pod!
    

    But if you go back to the port forwarding, you will not see any logs. All you see are the logs of the port-forward command.

    % kubectl port-forward pod/hello-server 8080:8080
    Forwarding from 127.0.0.1:8080 -> 8080
    Forwarding from [::1]:8080 -> 8080
    Handling connection for 8080
    

    To read the pod logs, we use kubectl:

    % kubectl logs pod/hello-server 
    2022/12/30 10:59:56 127.0.0.1:51866 GET /Pod
    

    Services

    So far so good, but unfortunately a pod alone doesn’t really implement the microservice pattern. If the pod is restarted, it might lose its IP. For our little example this is not a problem, but if we had more than one microservice talking to each other, we would need a way to share the new IPs between them. Thankfully, Kubernetes comes with a solution to this issue: services.

    Let’s write one inside service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-server-svc
    spec:
      selector:
        app: hello-server
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
    

    Now, apply the service:

    % kubectl apply -f service.yaml 
    service/hello-server-svc created
    

    And as usual, we do some port forwarding, but this time to the service instead of the pod:

    kubectl port-forward service/hello-server-svc 8080:8080
    

    Let’s test it in the second terminal

    % curl http://localhost:8080/Service
    Hello, Service!
    

    And look at the service logs

    % kubectl logs service/hello-server-svc
    2022/12/30 11:15:00 127.0.0.1:39346 GET /Service
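
    The payoff is that other workloads no longer need to know any pod IP: from inside the cluster, the service is reachable by its DNS name. As a hedged sketch, a client pod in the same namespace could call it like this:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// The service name resolves via cluster DNS; pod IPs never appear
    	resp, err := http.Get("http://hello-server-svc:8080/Service")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Print(string(body))
    }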
    

    Deployments

    This is looking much better now. If I wanted to scale this service, all I’d need to do is create another pod with the hello-server label. But doing that by hand would be tedious and error-prone. Thankfully, Kubernetes gives us deployments, which handle it for us and give us deployment strategies. Let us then create a deployment with three replicas.

    First, we need to delete the pod we created.

    % kubectl delete pod/hello-server
    pod "hello-server" deleted
    

    And in a file called deployment.yaml add the following description:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-server
      labels:
        app: hello-server
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-server
      template:
        metadata:
          labels:
            app: hello-server 
        spec:
          containers:
          - name: hello-server
            imagePullPolicy: Always
            image: mauromorales/hello-server:0.1.1
            ports:
            - containerPort: 8080
    

    and apply it

    kubectl apply -f deployment.yaml
    

    When it finishes, you should be able to list the generated pods:

    % kubectl get pods
    NAME                           READY   STATUS    RESTARTS   AGE
    hello-server                    1/1     Running   0             7m13s
    hello-server-5c7c6f798f-hp99p   1/1     Running   0             13s
    hello-server-5c7c6f798f-f2b4c   1/1     Running   0             13s
    hello-server-5c7c6f798f-2fxdm   1/1     Running   0             13s
    

    The first pod is the one we created manually, and the next three are the ones the deployment created for us.

    We start forwarding traffic to the service again:

    kubectl port-forward service/hello-server-svc 8080:8080
    

    And test it out

    % curl http://localhost:8080/Deployment
    Hello, Deployment!
    

    Let us also check the logs:

    % kubectl logs deployment/hello-server
    Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
    2022/12/30 11:29:47 127.0.0.1:46420 GET /Deployment
    

    Port forwarding is nice, but in its current state it only maps to one replica, which is less than ideal. In order to load-balance our service, we need to add an ingress rule.

    Create a file ingress.yaml with the following content:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-server
      annotations:
        ingress.kubernetes.io/rewrite-target: /
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-server-svc
                port:
                  number: 8080
    

    And apply it

    % kubectl apply -f ingress.yaml 
    ingress.networking.k8s.io/hello-server created
    

    As you probably expect, we need to forward traffic again; however, this time, instead of forwarding to our service, we forward to the traefik service (served on port 80):

    kubectl port-forward -n kube-system service/traefik 8080:80
    

    Let’s test it out by sending 20 requests this time

    % for i in `seq 1 20`; do curl http://localhost:8080/Traeffic; done
    ...
    Hello, Traeffic!
    

    And have a look at the logs

    % kubectl logs deployment/hello-server
    Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    

    And we can also check the individual logs of each pod

    % kubectl logs pod/hello-server-5c7c6f798f-hp99p --since=4m
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    % kubectl logs pod/hello-server-5c7c6f798f-f2b4c --since=4m
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    % kubectl logs pod/hello-server-5c7c6f798f-2fxdm --since=4m
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    

    And voilà, you can see how the requests were properly distributed among our 3 replicas.

    I hope you had as much fun as I did playing with Kubernetes.

  • Introductory Course to Kubernetes on Edge with K3s

    There’s a lot I need to learn for the new project I’m working on. Finding information is not really the problem; there’s quite a lot out there. What’s hard is filtering out the information you don’t need and connecting the dots to get a clear perspective. I found this introductory course to Kubernetes on Edge with K3s very useful.

  • It’s Kairos Time

    I’m excited to announce that I joined Spectro Cloud. I’ll be part of the team building Kairos, the immutable Linux meta-distribution for edge Kubernetes.

    Ok, a lot to unpack there, and I’m still very new to it, so I have numerous questions myself, but for my own sake, I will unwrap it:

    • Immutable Linux: some parts of the OS file system are read-only. This means that if you want to add a package or make a configuration change, you need to build a new image of the OS with the given changes. This is good for two reasons: on the one hand, it reduces the attack surface; on the other, it makes it easy to roll back to a specific version of the OS.
    • Meta-distribution: you can pick the flavor of the base Linux distribution on which Kairos is built. From what I can tell, openSUSE, Ubuntu and Alpine are already available, but others could follow.
    • Edge computing: systems nowadays tend to be centralized in datacenters. While this can be beneficial in some cases, it can be impractical in others. When you have a system running far away from the datacenter, you’re running at the edge of the network. For example, a computer in a parking lot, taking pictures of car plates and calculating the amount they need to be charged.
    • Kubernetes: a platform for deploying and orchestrating containerized applications. It was started by Google and became quite popular. It’s now part of the CNCF.

    For as long as I can remember, I’ve been a Linux enthusiast, so I’m very much looking forward to this experience.