Mauro Morales

software developer

Category: Tutorials

  • Remote Setup with EdgeVPN

    Last week I started using my old 13″ laptop and left the bulky 15″ workstation permanently at my desk. This setup gives me portability without losing power when I’m connected to my home network. Today, I decided to configure EdgeVPN on both devices to also have this setup while on the road.

    EdgeVPN makes use of tokens to connect nodes to a network. Since I have a Nextcloud server that keeps files in sync on both of my laptops, I decided to put the EdgeVPN configuration file there and wrote a small script that reads it to generate the token and decides which IP to give each device, based on its hostname:

    #!/bin/sh
    TOKEN=$(cat /path/to/Nextcloud/edgevpn-config.yaml | base64 -w0)
    IP=""
    if [ "$(hostname)" = "zeno" ]; then
    	IP="10.1.0.10"
    elif [ "$(hostname)" = "seneca" ]; then
    	IP="10.1.0.11"
    fi
    
    if [ "$IP" = "" ]; then
    	echo "IP not configured for $(hostname)"
    	exit 1
    else
    	echo Using IP: $IP
    fi
    edgevpn --token="$TOKEN" --address="$IP/24"
    

    Plus, I created systemd services so I can use systemctl instead of having to remember the name of that shell script:

    [Unit]
    Description=EdgeVPN
    Wants=network.target
    
    [Service]
    ExecStart=/path/to/start-edgevpn.sh
    
    [Install]
    WantedBy=multi-user.target
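
    With the unit saved as /etc/systemd/system/edgevpn.service (the file name is my choice), enabling it on both machines is just:

    sudo systemctl daemon-reload
    sudo systemctl enable --now edgevpn.service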

    On any of the nodes, I can start EdgeVPN’s web UI and list all connected nodes.
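
    Something along these lines works (a sketch: I’m assuming the api subcommand accepts the same --token flag as above and serves the UI on localhost:8080, so double-check edgevpn --help):

    TOKEN=$(cat /path/to/Nextcloud/edgevpn-config.yaml | base64 -w0)
    edgevpn api --token="$TOKEN"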

  • Deploying a Go Microservice in Kubernetes

    Most of my experience with web applications is with monoliths deployed to a PaaS, or to traditional servers using configuration management tools. Last week, I submerged myself in a different paradigm, the one of microservices. In this post, I’m going to share what I learned by deploying a Go application on top of Kubernetes.

    To follow along, you’ll need to have Go, Docker, and Kubernetes installed on your system. I’d recommend using something like K3d and K3s to set up Kubernetes on your machine if you don’t have access to a cluster already.
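
    For reference, creating a throwaway local cluster with K3d is a one-liner (the cluster name hello-cluster is arbitrary):

    k3d cluster create hello-cluster
    kubectl cluster-info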

    The hello server

    First, we’ll start by writing an elementary microservice. Let’s create a web server that responds with Hello and whatever we passed as the path of the URL.

    In a clean directory, we initialize and tidy Go. You can replace my GitHub username, mauromorales, with yours.

    go mod init github.com/mauromorales/hello-server
    go mod tidy

    Create the file main.go which will describe our microservice.

    // A simple web server that responds with "Hello " and whatever you pass as the
    // path of the URL. It also logs the requests.
    package main
    
    import (
    	"fmt"
    	"log"
    	"net/http"
    )
    
    func Log(handler http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL)
    		handler.ServeHTTP(w, r)
    	})
    }
    
    func main() {
    	http.HandleFunc("/", HelloServer)
    	log.Fatal(http.ListenAndServe(":8080", Log(http.DefaultServeMux)))
    }
    
    func HelloServer(w http.ResponseWriter, r *http.Request) {
    	fmt.Fprintf(w, "Hello, %s!\n", r.URL.Path[1:])
    }
    

    To test everything is working properly, run the microservice with

    go run main.go
    

    and on a different terminal make a web request

    % curl http://localhost:8080/Go
    Hello, Go!
    

    And you should also see a log for the request on the terminal running the hello-server

    % go run main.go 
    2022/12/30 11:42:42 127.0.0.1:59068 GET /Go
    

    A container version

    Kubernetes might be written in Go, but it wouldn’t know what to do with our hello-server as-is. Kubernetes schedules containers (grouped into pods), so let’s put our code into one. In a file called Dockerfile, add the following content:

    FROM golang as builder
    
    WORKDIR /app/
    
    COPY . .
    
    RUN CGO_ENABLED=0 go build -o hello-server /app/main.go
    
    FROM alpine:latest
    
    WORKDIR /app
    
    COPY --from=builder /app/ /app/
    
    EXPOSE 8080
    
    CMD ./hello-server
    

    Let’s first build an image

    docker build -t docker.io/mauromorales/hello-server:0.1.1 .
    

    When it’s done building, it will show up as one of your images

    % docker images
    REPOSITORY                     TAG            IMAGE ID       CREATED         SIZE
    mauromorales/hello-server      0.1.1          3960783c4afe   36 seconds ago   19.8MB
    

    So let’s run it

    docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1
    

    And in a different terminal test that it’s still working as expected

    % curl http://localhost:8080/Docker
    Hello, Docker!
    

    Looking back at the running container, we see that again our request was logged

    % docker run --rm -p 8080:8080 mauromorales/hello-server:0.1.1
    2022/12/30 10:48:57 172.17.0.1:58986 GET /Docker
    

    Deploying hello-server to Kubernetes

    Let’s begin by uploading the image we built in the last step to Docker Hub, for which you need an account.

    docker login --username mauromorales
    

    And once logged in, you can push the image

    docker push mauromorales/hello-server:0.1.1
    

    We’ll go through this process in iterations, to understand the different Kubernetes components.

    Pods

    Initially, we will only deploy a pod (a grouping mechanism for containers) with one single container. To achieve this, create a file called pod.yaml and add the following definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-server
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: mauromorales/hello-server:0.1.1
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
    

    And apply it:

    % kubectl apply -f pod.yaml
    pod/hello-server created
    

    You should now see it listed:

    % kubectl get pods         
    NAME                         READY   STATUS    RESTARTS   AGE
    hello-server                 1/1     Running   0          111s
    

    While the pod is running in the background, we need to forward the port to access it:

    kubectl port-forward pod/hello-server 8080:8080
    

    Now you can test again that it is working, by curling to it from a different terminal.

    % curl http://localhost:8080/Pod
    Hello, Pod!
    

    But if you go back to the port-forwarding terminal, you will not see any logs; all you see is the output of the port-forward command.

    % kubectl port-forward pod/hello-server 8080:8080
    Forwarding from 127.0.0.1:8080 -> 8080
    Forwarding from [::1]:8080 -> 8080
    Handling connection for 8080
    

    To read the pod logs, we need to use kubectl:

    % kubectl logs pod/hello-server 
    2022/12/30 10:59:56 127.0.0.1:51866 GET /Pod
    

    Services

    So far so good, but unfortunately a pod on its own doesn’t really implement the microservice pattern. If the pod is restarted, it might come back with a different IP. For our little example this is not a problem, but if we had more than one microservice talking to each other, we would need a way to share the new IPs between them. Thankfully, Kubernetes comes with a solution to this issue: Services.

    Let’s write one inside service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-server-svc
    spec:
      selector:
        app: hello-server
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
    

    Now, apply the service:

    % kubectl apply -f service.yaml 
    service/hello-server-svc created
    

    And as usual, we do some port forwarding, but this time to the service instead of the pod:

    kubectl port-forward service/hello-server-svc 8080:8080
    

    Let’s test it in the second terminal

    % curl http://localhost:8080/Service
    Hello, Service!
    

    And look at the service logs

    % kubectl logs service/hello-server-svc
    2022/12/30 11:15:00 127.0.0.1:39346 GET /Service
    

    Deployments

    This is looking much better now. If I wanted to scale this service, all I’d need to do is create another pod with the hello-server label. But doing that by hand would be tedious and error-prone. Thankfully, Kubernetes gives us deployments, which handle that for us and also give us deployment strategies. Let us then create a deployment with three replicas.

    In a file called deployment.yaml, add the following definition:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-server
      labels:
        app: hello-server
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-server
      template:
        metadata:
          labels:
            app: hello-server 
        spec:
          containers:
          - name: hello-server
            imagePullPolicy: Always
            image: mauromorales/hello-server:0.1.1
            ports:
            - containerPort: 8080
    

    and apply it

    kubectl apply -f deployment.yaml
    

    When it finishes, you should be able to list the generated pods:

    % kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    hello-server                    1/1     Running   0          7m13s
    hello-server-5c7c6f798f-hp99p   1/1     Running   0          13s
    hello-server-5c7c6f798f-f2b4c   1/1     Running   0          13s
    hello-server-5c7c6f798f-2fxdm   1/1     Running   0          13s
    

    The first pod is the one we created manually, and the next three are the ones the deployment created for us. Since the deployment now manages the replicas, we no longer need the manual pod, so let’s delete it:

    % kubectl delete pod/hello-server
    pod "hello-server" deleted

    We start forwarding traffic to the service again:

    kubectl port-forward service/hello-server-svc 8080:8080
    

    And test it out

    % curl http://localhost:8080/Deployment
    Hello, Deployment!
    

    Let us also check the logs:

    % kubectl logs deployment/hello-server
    Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
    2022/12/30 11:29:47 127.0.0.1:46420 GET /Deployment
    

    Port forwarding is nice, but in its current state it will only map to one replica, which is less than ideal. In order to load-balance across the replicas, we need to add an Ingress rule.

    Create a file called ingress.yaml with the following content:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-server
      annotations:
        ingress.kubernetes.io/rewrite-target: /
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-server-svc
                port:
                  number: 8080
    

    And apply it

    % kubectl apply -f ingress.yaml 
    ingress.networking.k8s.io/hello-server created
    

    As you probably expect, we need to forward traffic again; however, this time instead of forwarding to our service, we forward to the Traefik service (served on port 80):

    kubectl port-forward -n kube-system service/traefik 8080:80
    

    Let’s test it out by sending 20 requests this time

    % for i in `seq 1 20`; do curl http://localhost:8080/Traeffic; done
    ...
    Hello, Traeffic!
    

    And have a look at the logs

    kubectl logs deployment/hello-server
    Found 3 pods, using pod/hello-server-5c7c6f798f-hp99p
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    

    And we can also check the individual logs of each pod

    % kubectl logs pod/hello-server-5c7c6f798f-hp99p --since=4m
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:43070 GET /Traeffic
    seneca% kubectl logs pod/hello-server-5c7c6f798f-f2b4c --since=4m
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:59596 GET /Traeffic
    seneca% kubectl logs pod/hello-server-5c7c6f798f-2fxdm --since=4m
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    2022/12/30 11:43:14 10.42.0.22:40840 GET /Traeffic
    

    And voilà, you can see how the requests were properly distributed among our 3 replicas.
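
    If you want to clean up afterwards, you can delete everything we created (and the local K3d cluster, if you made one at the beginning):

    kubectl delete -f ingress.yaml -f service.yaml -f deployment.yaml
    k3d cluster delete hello-cluster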

    I hope you had as much fun as I did playing with Kubernetes.

  • Rails Routing Advanced Constraints for User Authentication Without Devise

    Many times we mount engines and restrict access to admin users via Devise. In this post, I’ll show you how to do the same when using a different authentication mechanism.

    Let’s take the Sidekiq engine as an example. According to its wiki, all we need to do is surround the mount with the authenticate method.

    # config/routes.rb
    authenticate :user, ->(user) { user.admin? } do
      mount Sidekiq::Web => '/sidekiq'
    end

    But since this method is a Devise helper method, how can we achieve the same results when we use a different authentication mechanism?

    Turns out it’s actually very simple: we can use one of Rails’ advanced routing constraints.

    # config/routes.rb
    mount Sidekiq::Web, at: '/sidekiq', constraints: AdminConstraint

    Not too shabby! It looks even better than the Devise helper method IMO. But let’s dive into this constraint.

    For the sake of simplicity, I will assume that our authentication mechanism consists of a JWT that gets stored in a cookie, and a service that takes care of verifying that token. This service will return a user when successful, or nil otherwise. Replace this behaviour with whatever mechanism you have instead.

    # app/constraints/admin_constraint.rb
    class AdminConstraint
      class << self
        def matches?(request)
          user = TokenAuthenticationService.new(request.cookies['authToken']).call
          user.present? && user.admin?
        end
      end
    end

    Yes, it’s a bit more code, but not that much, and it allows us to keep the routes file a bit cleaner and gives us a single place to define what admin access means.

    Let’s finish the job by adding a test. I like RSpec, so I’ll write a request test.

    I’ll also assume that you have a token generation service.

    # spec/constraints/admin_constraint_spec.rb
    require "rails_helper"
    
    # we won't want to rely on sidekiq for our test, so we'll create a dummy Engine
    module MyEngine
      class Engine < ::Rails::Engine
        isolate_namespace MyEngine
      end
    
      class LinksController < ::ActionController::Base
        def index
          render plain: 'hit_engine_route'
        end
      end
    end
    
    MyEngine::Engine.routes.draw do
      resources :links, :only => [:index]
    end
    
    module MyEngine
      RSpec.describe "Links", :type => :request do
        include Engine.routes.url_helpers
    
        before do
          Rails.application.routes.draw do
            mount MyEngine::Engine => "/my_engine", constraints: AdminConstraint
          end
    
          cookies['authToken'] = token
        end
    
        after do
          Rails.application.routes_reloader.reload!
        end
    
        let(:token) { TokenGeneratorService.new(user).call }
    
        context 'with an admin token cookie' do
          let(:user) { create(:user, admin: true) }
    
          it "is found" do
            get links_url
    
            expect(response).to have_http_status(:ok)
            expect(response.body).to eq('hit_engine_route')
          end
        end
    
        context 'with a non-admin user' do
          let(:user) { create(:user, admin: false) }
    
          it "is not found" do
            expect {
              get links_url
            }.to raise_error(ActionController::RoutingError)
          end
        end
      end
    end

    Et voilà! We’re sure that our constraint behaves as expected.

    Resources

    All the code in this post was based on the documentation of the projects used above: Rails, Devise, Sidekiq, and RSpec.

  • Running MNT Reform OS on an NVME Disk

    Running the MNT Reform 2 from an SD card is not a bad solution. It’s similar to the way a Raspberry Pi is run. However, I wanted to free the SD card slot. In this post I describe the whole process, from picking and buying an NVMe SSD to installing and configuring it.

    But before I continue, I cannot take credit for this work, as it’s summarized in the Operating System on NVMe Without SD Card post. I just wanted to give a little more detail about the steps I took and some of the mistakes I made, and to add some information related to using an encrypted device.

    PARTS AND TOOLS NEEDED

    • 1 NVMe disk
    • 1 Phillips screwdriver
    • 1 M2x4mm pan head screw (included in the DIY kit)

    PICKING AND BUYING AN NVME DISK

    I bought the one that MNT puts in the assembled version of the Reform 2, a 1TB Transcend MTE220S, because I didn’t want to risk it. I bought it from amazon.de (I tried to find it in local businesses in Belgium, but none of the ones I was pointed to had it). The price was around 125 Euro, shipping included.

    There’s a community page on Confirmed Working NVMe Drives that will hopefully list more options in the future, but so far the Transcend disk seems like a very good one.

    INSTALLING THE DISK

    1. Disconnect the laptop from the power
    2. Discharge yourself by touching a metal surface or using a discharge bracelet
    3. Remove the acrylic bottom
    4. Remove the batteries
    5. Place the NVMe device in the M2 socket
    6. Secure it

    Do not close the laptop just yet. Turn it around, plug in the power, turn it on, and log in with your user. If the installation was successful, you should be able to see the device in the Disks application.

    PARTITIONING, FORMATTING AND ENCRYPTING

    The next step is to create one or more partitions on the disk. I used GNOME Disks, but it’s limited (you cannot create logical volumes with it), so you might want to install GParted or follow a CLI tutorial to achieve your specific partitioning setup.

    Note: If you plan to use the whole disk without partitioning and only format it from the Drive Options menu (the 3 dots at the top right corner), the script that mounts your partition, /sbin/reform-init, might have issues because of the name of the device. At least that’s what I experienced the first time I went through this process.

    My current setup is one encrypted ext4 partition for root and one encrypted ext4 partition for home (I will write about this in a future post). This means I have to enter two passwords when booting. I’m planning to use a key in the future, but if you don’t want to enter two passwords, read about logical volumes; if you don’t want encryption at all, then you don’t have to worry about this.

    1. Select the NVMe Disk
    2. Click on the + sign to create a new partition
    3. Select the size (needs to be at least the size of the SD card) and continue
    4. Give it a name e.g. “root”
    5. Select ext4 as your file system
    6. Select encryption with LUKS
    7. Press “Next”
    8. Add a pass phrase
    9. Press “Done”

    MIGRATE YOUR DATA

    To copy all your data from the SD card to the NVMe disk, we first need to unlock the disk. The first argument is the path to the device, so it needs to match whatever partition you created in the previous step. The second argument is the name you want to give the mapping, so choose whatever you prefer.

    # cryptsetup luksOpen /dev/nvme0n1p1 crypt

    The unencrypted partition will be accessible on /dev/mapper/crypt

    We can use that path to run the reform-migrate script

    # reform-migrate /dev/mapper/crypt

    You can of course use Disks to unlock (the open-lock button) and mount (the play button) the device instead. In that case, you will need to use the following command to move all your data.

    # rsync -axHAWXS --numeric-ids --info=progress2 / /media/USER/NAME

    Make sure to update the last argument to be the path to where you mounted the device.

    CONFIGURE BOOTING FROM NVME

    Booting from the NVMe disk is a two-step process: we first need to configure the laptop to boot from the eMMC module, and then configure that system to decrypt and mount the NVMe drive and init from it.

    Read more about this topic in Sections 10.2 and 10.3 of the Operators Handbook.

    To switch the boot mechanism from the SD card to the inner eMMC module where the MNT rescue disk resides, we need to flip a DIP switch that sits underneath the heat sink.

    1. Shutdown and disconnect from power
    2. Remove the heat sink (be careful not to put it bottom-flat on a surface, since there’s thermal paste on it)
    3. Flip the dip switch on the bottom right (or top left depending on your perspective)
    4. Place the heat sink back in place

    We can now plug the power back in and start the machine. When prompted for a login, use “root” without a password, since this is a completely different system from the one configured on the SD card.

    Now we need to download a newer version of U-Boot from within the rescue system.

    # wget http://mntre.com/reform_md/flash-rescue-reform-init.bin

    U-Boot is a small bootloader used to start Linux. From what I understand, this “newer” version is the same version that is already on the SD card, so using that one instead of downloading a new one would also be an option.

    To flash the new U-Boot, we need to unlock the boot partition:

    # echo 0 > /sys/class/block/mmcblk0boot0/force_ro

    And flash the binary

    # dd if=flash-rescue-reform-init.bin of=/dev/mmcblk0boot0 bs=1024 seek=33

    Now that we have this U-Boot version in place, we can configure it to boot from the NVMe drive:

    # reform-boot-config nvme

    This creates the file /reform-boot-medium with the word nvme in it, which matters because it’s read by reform-init.
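
    A quick way to double-check:

    # cat /reform-boot-medium
    nvme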

    Note: One important thing to mention is that reform-init will only try to unlock and mount the encrypted partition at /dev/nvme0n1p1. With a different setup, you need to modify this script to point to the right path. I stumbled across this problem on my first attempt, but it was quite simple to debug and it helped me understand better what’s going on under the hood.

    If everything went well, you should be able to reboot the device and it will boot from the NVMe drive successfully. To finalize the process:

    1. Shutdown the system and unplug it
    2. Put the batteries back in place (be careful with the polarity)
    3. Place the acrylic bottom
  • Running a Patched Ruby on Heroku

    You use a PaaS because you want all the underlying infrastructure and configuration of your application to be hidden from you. However, there are times when you are forced to look deeper into the stack. In this article I want to share how simple it is to run a patched version of Ruby on Heroku.

    CONTEXT

    It all started while trying to upgrade Ruby in an application. Unfortunately, every newer version I tried made the application break. After some searching around, I came across a bug report from 3 years ago in Ruby upstream.

    The issue was actually not in Ruby but in Onigmo, the regular expression library that Ruby uses under the hood. All versions since 2.4 were affected, i.e. all supported versions, including 2.5.8, 2.6.6 and 2.7.1 at the moment of writing. Lucky for me, Onigmo had been patched upstream, but the patch will only land in Ruby 2.7 later this year.

    This meant that I was going to have to patch Ruby myself. For local development this is not a big deal, but I wasn’t sure if it was possible on Heroku. I remembered from Cloud Foundry and Jenkins X that the part of the platform taking care of building and installing the language is the buildpack, so I decided to investigate buildpacks on Heroku.

    HEROKU’S RUBY BUILDPACK

    Heroku’s Ruby buildpack is used to run your application whenever there’s a Gemfile and a Gemfile.lock file. From parsing these, it figures out which version of Ruby it’s meant to use.
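
    For example, pinning the Ruby version in the Gemfile is a single line:

    ruby '2.6.6'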

    Once it knows which version of Ruby to install, it runs bin/support/download_ruby to download a prebuilt package and extract it, making it available to your application. As a quick hack, I decided to modify this file to do what I did in my development environment to patch Ruby.

    1. First, download the Ruby source code from upstream instead of the prebuilt version from Heroku:

       curl --fail --silent --location -o /tmp/ruby-2.6.6.tar.gz https://cache.ruby-lang.org/pub/ruby/2.6/ruby-2.6.6.tar.gz
       tar xzf /tmp/ruby-2.6.6.tar.gz -C /tmp/src
       cd /tmp/src/ruby-2.6.6

    2. Then apply a patch from a file I placed under bin/support/ (probably not the best place, but OK while I was figuring things out):

       patch < "$BIN_DIR/support/onigmo-fix.diff"

    3. And finally, build and install Ruby:

       autoconf
       ./configure --disable-install-doc --prefix "$RUBY_BOOTSTRAP_DIR" --enable-load-relative --enable-shared
       make
       make install

    You can find an unpolished but working version of what I did here

    USING THE BUILDPACK IN YOUR APPLICATION

    Now all that is left is to tell your application to use your custom buildpack instead of Heroku’s supported one. You can do this in the command line by running

    heroku buildpacks:set https://github.com/some/buildpack.git -a myapp

    Or by adding a file called app.json to the root directory of your application sources (not in the buildpack sources). I ended up using this form, since I prefer to have as much of the platform configuration as possible in code.

    {
      "environments": {
        "staging": {
          "addons": ["heroku-postgresql:hobby-dev"],
          "buildpacks": [
            {
              "url": "https://github.com/some/buildpack.git"
            }
          ]
        }
      }
    }

    Now every time a deployment is made to this environment, the application will download the Ruby sources, then patch, build, and install them.

    This of course is not very optimal, since you’ll be wasting a lot of time building Ruby. Instead, you should do something similar to what Heroku does: prebuild the patched version of Ruby and download it from an S3 bucket.

    CONCLUSION

    Using a patched version of Ruby comes with a heavy price tag: maintenance. You should keep applying updates, at least security updates, until the patch lands upstream. You also need to use the patched version in all your environments, e.g. production, staging, et al., including your CI. Whether all this extra work is worth it is something you’ll need to analyze. In the cases where the benefits outweigh the costs, it’s great to know that you don’t have to give up all the benefits of a platform like Heroku to run your own version of Ruby.

  • Installing openSUSE Tumbleweed On the Dell XPS 13

    This post will show you how to install openSUSE’s rolling release, Tumbleweed, on the Dell XPS 13 9350 FHD.

    Update 2016-06-30: BIOS 1.4.4 is out.

    Update 2016-06-22: The kernel flag is not needed anymore since kernel 4.6, which was introduced around Tumbleweed version 20160612.

    Update 2016-05-04: Added a section to fix the sound issues when using headphones.

    PREPARATION

    1. Create a recovery USB in case you want to return the machine to its original state.
    2. Get yourself a copy of openSUSE Tumbleweed.
    3. Create a bootable USB. There are instructions for Linux, Windows and OS X.

    UPDATE THE BIOS

    Warning: Do not reboot the machine when the BIOS update is running!

    1. Download the latest BIOS update (1.3.3 at the time of writing).
    2. Save it under /boot/EFI.
    3. Reboot the machine.
    4. Press F12 and select BIOS update.

    INSTALLATION

    1. Reboot the machine.
    2. Press F12, configure the machine to use Legacy BIOS, and reboot.
    3. Boot from the Tumbleweed USB key and follow the installer instructions until you get to the partitioning stage.
    4. Remove all partitions and create an MSDOS partition table.
    5. Add your desired partitions inside the just created partition table. In my case I have a root, a swap and a home partition.
    6. Finish the installation process.

    FIXING THE FLICKERING DISPLAY

    Note: This issue was fixed on kernel 4.6, here is the bugzilla link.

    There is a reported issue that causes your screen to flicker. Until the fix gets merged into the kernel you can do this hack:

    1. Inside /etc/default/grub, add the kernel flag i915.enable_rc6=0 (see the example below).
    2. grub2-mkconfig -o /boot/grub2/grub.cfg
    3. Restart your machine.
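
    For reference, step 1 means appending the flag to the default kernel command line in /etc/default/grub, keeping whatever flags are already there (the other flags shown here are just placeholders):

    GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet i915.enable_rc6=0"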

    FIXING THE SOUND WHEN USING HEADPHONES

    When using headphones, you will notice a high-pitched noise when no sound is being played and a loud crackling sound when an application starts or stops playing audio.

    First, fix the high-pitched noise by setting the microphone boost volume.

    amixer -c 0 cset 'numid=10' 1

    To fix the crackling sound, the only fix I’ve found so far is to disable the SOUND_POWER_SAVE_ON_BAT option in TLP.

    augtool set /files/etc/default/tlp/SOUND_POWER_SAVE_ON_BAT 0

    You will need to reapply the battery settings for the changes to take effect, and set TLP up to start at boot time.

    systemctl enable tlp.service --now

    Have a lot of fun…

  • Running Multiple Redis Instances

    This article will teach you how to run one or more Redis instances on a Linux server using systemd to spawn copies of a service.

    INSTALLING REDIS

    The easiest way to install Redis on Linux is with your distribution’s package manager. Here is how you would do it on openSUSE:

    sudo zypper install redis

    In case your distribution doesn’t provide a Redis package, you can always follow the upstream instructions to compile it from scratch.

    CONFIGURING A REDIS INSTANCE

    1. Make a copy of the example/default file that is provided by the package:

       cd /etc/redis/
       cp default.conf.example my_app.conf

       Use a name that will help you recognize the purpose of the instance. For example, if each instance will be mapped to a different application, give it the name of the application. If each instance will be mapped to the same application, use the port on which it will be running.

    2. Change the ownership of the newly created configuration file to user “root” and group “redis”:

       chown root.redis my_app.conf

    3. Add a “pidfile”, a “logfile” and a “dir” to the .conf file:

       pidfile /var/run/redis/my_app.pid
       logfile /var/log/redis/my_app.log
       dir /var/lib/redis/my_app/

       Each of these attributes has to match the name of the configuration file without the extension. Make sure the “daemonize” option is set to “no” (this is the default value). If you set this option to yes, Redis and systemd will interfere with each other when spawning the processes:

       daemonize no

       Define a “port” number and remember that each instance should be running on a different port:

       port 6379

    4. Create the database directory at the location given in the configuration file:

       install -d -o redis -g redis -m 0750 /var/lib/redis/my_app

       The database directory has to be owned by user “redis” and group “redis”, with permissions 750.

    Repeat these steps for every instance you want to set up. In my case, I set up a second instance called “my_other_app”:

    .
    ├── default.conf.example
    ├── my_app.conf
    └── my_other_app.conf

    ADDING UNITS TO SYSTEMD FOR THE REDIS SERVICE

    In order for systemd to know how to enable and start each instance individually, you will need to add a service unit inside the system configuration directory located at /etc/systemd/system. For convenience, you might also want to start/stop all instances at once; for that, you will need to add a target unit.

    In case you installed Redis on openSUSE, these two files are already provided for you under the system unit directory /usr/lib/systemd/system.

    1. Create the service unit file “redis@.service” with the following contents:

       [Unit]
       Description=Redis
       After=network.target
       PartOf=redis.target

       [Service]
       Type=simple
       User=redis
       Group=redis
       PrivateTmp=true
       PIDFile=/var/run/redis/%i.pid
       ExecStart=/usr/sbin/redis-server /etc/redis/%i.conf
       Restart=on-failure

       [Install]
       WantedBy=multi-user.target redis.target

       The unit file is separated in sections. Each section consists of variables and the value assigned to them. In this example:

       • After: when the Redis instance is enabled it will get started only after the network has been started.
       • PartOf: this instance belongs to the redis.target and will get started/stopped as part of that group.
       • Type: simple means the service process doesn’t fork.
       • %i: a specifier that is expanded by systemd to the “my_app” instance.

    2. Create the target unit file “redis.target” with the following contents:

       [Unit]
       Description=Redis target allowing to start/stop all redis@.service instances at once
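
    After adding or changing unit files, tell systemd to pick them up:

    systemctl daemon-reload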

    INTERACTING WITH REDIS

    If everything went as expected you should be able to interact with the individual instances:

    systemctl start redis@my_app
    systemctl enable redis@my_other_app

    And also with all the instances at the same time:

    systemctl restart redis.target
    systemctl stop redis.target

    TROUBLESHOOTING

    If things didn’t go as expected and you cannot start an instance, make sure to check the instance’s status:

    systemctl status redis@my_app

    If the issue doesn’t show up there then check systemd’s journal:

    journalctl -u redis@my_app

    For example, if you forgot to give the right permissions to the configuration file, you’d see something like this in the journal:

    Apr 23 10:02:53 mxps redis-server[26966]: 26966:C 23 Apr 10:02:53.917
    # Fatal error, can't open config file '/etc/redis/my_app.conf'

    ACKNOWLEDGMENTS

    • Thanks to the openSUSE Redis package maintainers for creating such a nice package that you can learn from.
    • The book How Linux Works provided the details on how systemd instances work.
  • Profiling Vim

    I like Vim because it’s very fast. Unfortunately, the other day I found myself opening a diff file that took forever to load. The file had 58187 lines in it (this number will be important later on), but I never thought Vim would choke on something that was less than 2MB in size.

    This post was originally published on medium

    FINDING OUT WHICH PLUGIN IS MAKING VIM SLOW

    If you find yourself in a similar situation this is what you can do in order to find out what is causing Vim to slow down.

    1. Open Vim and start profiling:

       :profile start /tmp/profile.log
       :profile func *
       :profile file *

       This tells Vim to save the results of the profile into /tmp/profile.log and to run the profile for every file and function. Note: the profile.log file only gets written once you close Vim.

    2. Do the action that is taking a long time to run (in my case, opening the diff file):

       :edit /tmp/file.diff
       :profile pause
       :q!

    3. Analyze the data. There is a lot of information in /tmp/profile.log, but you can start by focusing on the Total time. In my case there was a clear offender with a total time of more than 14 seconds! It looked like this:

       FUNCTION  <SNR>24_PreviewColorInLine()
       Called 58187 times
       Total time:  14.430544
        Self time:   2.961442

       Remember the number of lines in the file I mentioned before? For me it was interesting to see that the function gets called just as many times.

    4. Pinpoint the offender. Finding out where a function is defined is very easy thanks to the <SNR> tag and the number right after it. You simply need to run :scriptnames and scroll until you find the index number you are looking for, in my case 24:

       24: ~/.vim/bundle/colorizer/autoload/colorizer.vim

    I opened a GitHub issue to make the developers of the plugin aware, but it seems the project has been left unmaintained, so I decided to remove it from my vimrc file.

  • Running openSUSE 13.2 on Linode

    Linode is one of my favorite VPS providers out there. One of the reasons why I like them is because they make it extremely easy to run openSUSE. This post is a quick tutorial on how to get you started.

    The first time you log in, you will be presented with the different Linode plans and the location where your server will reside.

    I’ll choose the smallest plan

    Once you see your Linode listed, click on its name.

    Now click on “Deploy an image”

    There we will select openSUSE 13.2 and the amount of disk space; you can leave the defaults, which will use the full disk size with a 256MB swap partition. Choose your password and click Deploy.

    This will take a bit but as soon as it’s done you will be able to Boot your machine.

    Finally click on the “Remote Access” tab so you can see different options to log into your machine.

    I personally like to ssh in from my favorite terminal app

    ssh root@10.0.0.10

    You will be welcomed by openSUSE with the following message:

    Have a lot of fun...
    linux:~ #

    Now you can play with your new openSUSE 13.2 box. Enjoy!

  • Getting Started With Continuous Delivery

    More and more companies are requiring developers to understand Continuous Integration and Continuous Delivery, but starting to implement it in your projects can be a bit overwhelming. Start with a simple website, and soon enough you will feel confident enough to move on to more complex projects.

    THE RIGHT MINDSET

    TDD/BDD, CI/CD, XP, Agile, Scrum …. Ahhhhh, leave me alone I just want to code!

    Yes, all these methodologies can be a bit complicated at first, but simply because you are not used to them. Like a muscle, you need to train them, and the more you do, the sooner doing them will stop feeling like a total waste of time.

    Once you have made up your mind that CD is for you, your team, or your project, you will need to define a process and follow it. Don’t make it easy to break the process, and before you know it you and your team will feel like fish in water.

    AUTOMATE A SIMPLE WEBSITE DEPLOYMENT

    There are many ways you can solve this problem; I will use one particular stack. If you don’t have experience with any of the tools, try to implement it with one you do have experience with.

    • VPS: DigitalOcean (alternatives: Linode or Vagrant)
    • Configuration management: Ansible (alternatives: Chef or Puppet)
    • Static site generator: Middleman (alternatives: Jekyll or pure HTML)
    • CI/CD server: Semaphore (alternatives: Codeship or Jenkins)

    The first thing is to create a new droplet in DO (you could also do this with Ansible, but we won’t in this tutorial). Make sure there is a deploy user and set up SSH keys for it (again, something we could do with Ansible, but we’ll leave that for another post). Set up your domain to point to the new server’s IP address; I will use ‘example.com’.
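
    As a rough sketch (run as root on the droplet, and replace the key path with your own public key), creating that user and authorizing your key could look like this:

    useradd -m -s /bin/bash deploy
    mkdir -p /home/deploy/.ssh
    cat /path/to/your_key.pub >> /home/deploy/.ssh/authorized_keys   # your public key goes here
    chown -R deploy:deploy /home/deploy/.ssh
    chmod 700 /home/deploy/.ssh && chmod 600 /home/deploy/.ssh/authorized_keys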

    ANSIBLE

    Create a folder for your playbook and inside of it start with a file called ansible.cfg. There we will override the default configuration by pointing to a new inventory inside your playbook’s folder and specifying the deploy user.

    [defaults]
    hostfile=inventory
    remote_user=deploy
    

    Now in our inventory file we specify a group called web and include our domain.

    [web]
    example.com
    

    Our tasks will be defined in simple-webserver.yml

    ---
    - name: Simple Web Server
      hosts: example.com
      sudo: True
      tasks:
        - name: Install nginx
          apt: pkg=nginx state=installed update_cache=true
          notify: start nginx
        - name: remove default nginx site
          file: path=/etc/nginx/sites-enabled/default state=absent
        - name: Assures project root dir exists
          file: >
            path=/srv/www/example.com
            state=directory
            owner=deploy
            group=www-data
        - name: copy nginx config file
          template: >
            src=templates/nginx.conf.j2
            dest=/etc/nginx/sites-available/example.com
          notify: restart nginx
        - name: enable configuration
          file: >
            dest=/etc/nginx/sites-enabled/example.com
            src=/etc/nginx/sites-available/example.com
            state=link
          notify: restart nginx
      handlers:
        - name: start nginx
          service: name=nginx state=started
        - name: restart nginx
          service: name=nginx state=restarted
    

    In it we make reference to a template called templates/nginx.conf.j2 where we will specify a simple virtual host.

    server {
            listen *:80;
    
            root /srv/www/example.com;
            index index.html index.htm;
    
            server_name example.com;
    
            location / {
                    try_files $uri $uri/ =404;
            }
    }
    

    I’ll show you in another post how to do this same setup but with multiple virtual hosts in case you run multiple sites.

    Run it by calling:

    ansible-playbook simple-webserver.yml
    

    MIDDLEMAN

    Middleman has a very simple way to deploy over rsync. Just make sure you have the following gem in your Gemfile

    gem 'middleman-deploy'
    

    And then add something like this to your config.rb

    activate :deploy do |deploy|
      deploy.method = :rsync
      deploy.host   = 'example.com'
      deploy.path   = '/srv/www/example.com'
      deploy.user  = 'deploy'
    end
    

    Before you can deploy, you need to remember to build your site. This is prone to errors, so instead we will add rake tasks to our Rakefile to do this for us.

    desc 'Build site'
    task :build do
      `middleman build`
    end
    
    desc 'Deploy site'
    task :deploy do
      `middleman deploy`
    end
    
    desc 'Build and deploy site'
    task :build_deploy => [:build, :deploy] do
    end
    

    GIT FLOW

    Technically you don’t really need git flow for this process but I do believe having a proper branching model is key to a successful CD environment. Depending on your team’s process you might want to use something else but if you don’t have anything defined please take a look at git flow, it might be just what you need.

    For this tutorial I will oversimplify the process and just use the develop, master and release branches by following these three steps:

    1. Commit all the desired changes into the develop branch
    2. Create a release and add the release’s information
    3. Merge the release into master

    Let’s go through the steps in the command line. We start by adding the new features and committing them.

    git add Rakefile
    git commit -m 'Add rake task for easier deployment'
    

    Now we create a release.

    git flow release start '1.0.0'
    

    This would be a good time to test everything out. Bump the version number of your software (in my case 1.0.0), update the change log and do any last minute fixes.

    Commit the changes and let’s wrap up this step by finishing our release.

    git flow release finish '1.0.0'
    

    Try to write something meaningful for your tag message so you can easily refer to a version later on by its description.

    git tag -n
    

    Hold your horses and don’t push your changes just yet.

    SEMAPHORE

    Add a new project from GitHub or Bitbucket.

    For the build you might want to have something along the lines of:

    bundle install --path vendor/bundle
    bundle exec rake spec
    

    Now go into the project’s settings, under the Deployment tab, and add a server.

    Because we are using the generic option, Semaphore will need access to our server. Generate an SSH key, paste the private key in Semaphore, and add the public key to your server.
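
    If you need to generate a dedicated key pair for this, something like the following works (the file name and comment are arbitrary):

    ssh-keygen -t rsa -b 4096 -f semaphore_deploy -C "semaphore"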

    For the deploy commands you need to have something like this:

    ssh-keyscan -H -p 22 example.com >> ~/.ssh/known_hosts
    bundle exec rake build_deploy
    

    PUSH YOUR CHANGES

    Push your changes to the master branch and voilà, Semaphore will build and deploy your site.

    Once you get into the habit of doing this with your website, you will feel more confident doing it with something like a Rails application.

    If you have any questions please leave them below, I’ll respond to every single one of them.