Mauro Morales

software developer

Author: Mauro

  • Running a Patched Ruby on Heroku

    You use a PaaS because you want all the underlying infrastructure and configuration of your application to be hidden from you. However, there are times when you are forced to look deeper into the stack. In this article I want to share how simple it is to run a patched version of Ruby on Heroku.


    It all started while trying to upgrade Ruby in an application. Unfortunately, every newer version I tried made the application break. After some searching around, I came across a bug report from 3 years ago in Ruby upstream.

    The issue was actually not in Ruby but in Onigmo, the regular expression library that Ruby uses under the hood. All versions since 2.4 were affected, i.e. all supported versions, including 2.5.8, 2.6.6 and 2.7.1 at the moment of writing. Lucky for me, Onigmo had been patched upstream, but the patch will only land in Ruby 2.7 later this year.

    This meant that I was going to have to patch Ruby myself. For local development this is not a big deal, but I wasn’t sure whether it was possible on Heroku. I remembered from CloudFoundry and Jenkins-X that the part of the platform taking care of building and installing the language is the buildpack, so I decided to investigate buildpacks on Heroku.


    Heroku’s Ruby buildpack is used to run your application whenever there’s a Gemfile and a Gemfile.lock file. By parsing these, it figures out which version of Ruby it’s meant to use.
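    That detection step can be sketched roughly like this (a simplified illustration; `ruby_version_from_lockfile` is a hypothetical helper, not the buildpack’s actual code — Bundler records the pinned version in a RUBY VERSION section of Gemfile.lock):

```ruby
# Extract the pinned Ruby version from a Gemfile.lock's RUBY VERSION section.
# Bundler writes a line like "   ruby 2.6.6p146" under that heading.
def ruby_version_from_lockfile(lockfile_contents)
  match = lockfile_contents[/^RUBY VERSION\n\s+ruby (\d+\.\d+\.\d+)/, 1]
  match || "default"
end

lockfile = <<~LOCK
  GEM
    specs:

  RUBY VERSION
     ruby 2.6.6p146
LOCK

puts ruby_version_from_lockfile(lockfile)  # => "2.6.6"
```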

    Once it knows which version of Ruby to install, it runs bin/support/download_ruby to download a pre-built package and extract it, making it available for execution by your application. As a quick hack, I decided to modify this file to do what I had done in my development environment to patch Ruby.

    1. First, download the Ruby source code from upstream instead of the pre-built version by Heroku:
       curl --fail --silent --location -o /tmp/ruby-2.6.6.tar.gz
       tar xzf /tmp/ruby-2.6.6.tar.gz -C /tmp/src
       cd /tmp/src/ruby-2.6.6
    2. Then apply a patch from a file I placed under bin/support/ (probably not the best place, but OK while I was figuring things out):
       patch < "$BIN_DIR/support/onigmo-fix.diff"
    3. And finally, build and install Ruby:
       autoconf
       ./configure --disable-install-doc --prefix "$RUBY_BOOTSTRAP_DIR" --enable-load-relative --enable-shared
       make
       make install

    You can find an unpolished but working version of what I did here.


    Now all that is left is to tell your application to use your custom buildpack instead of Heroku’s supported one. You can do this on the command line by running:

    heroku buildpacks:set -a myapp

    Or by adding a file called app.json at the root directory of your application sources (not in the buildpack sources). I ended up using this form, since I prefer to have as much of the platform configuration as possible in code:

    {
      "environments": {
        "staging": {
          "addons": ["heroku-postgresql:hobby-dev"],
          "buildpacks": [
            {
              "url": ""
            }
          ]
        }
      }
    }

    Now, every time a deployment is made to this environment, the application will download the Ruby sources, then patch, build and install them.

    This of course is not optimal, since you’ll be wasting a lot of time building Ruby. Instead, you should do something similar to what Heroku does: pre-build the patched version of Ruby and download it from an S3 bucket.


    Using a patched version of Ruby comes with a heavy price tag: maintenance. You still need to apply updates (at least security updates) until the patch lands upstream. And you also need to use the patched version in all your environments, e.g. production, staging, et al., including your CI. Whether all this extra work is worth it is something you’ll need to analyze. In the cases where the benefits outweigh the costs, it’s great to know that you don’t have to give up all the benefits of a platform like Heroku to run your own version of Ruby.

  • Ruby’s DATA Stream

    The STDIN and ARGF streams are commonly used in Ruby, but there’s also the less popular DATA stream. Here’s how it works, along with some examples in the wild.


    As with any other stream, you can use gets and readlines. This behaviour is defined by the IO class. However, there’s a caveat: your script needs to have a data section. To define one, use the __END__ keyword to separate code from data.

    $ cat hello_world.rb
    puts DATA.gets
    hello world!
    $ ruby hello_world.rb
    hello world!

    Look at that, another way to code hello world in Ruby. Without the __END__ keyword, you’ll get the following error:

    NameError: uninitialized constant DATA


    You could use the data section of the script if you wanted to keep the data and code really close, or if you wanted to do some sort of pre-processing on your sources. But to be honest, the only real benefit I can think of is performance. Instead of starting a second IO operation to read a file containing the data, the data gets loaded at the same time as the script.
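    To make the idea concrete, here is a small sketch (the script contents and seed data are invented for illustration): a script that ships its own seed data after __END__ and sums a column from it at runtime. The example writes the script to a temp file and runs it, since DATA only exists inside the script that defines the data section.

```ruby
require "tempfile"

# A script that keeps CSV-ish seed data after __END__ and
# reads it back through the DATA stream at runtime.
script = <<~RUBY
  totals = DATA.readlines.map { |line| line.split(",").last.to_i }
  puts totals.sum
  __END__
  apples,3
  oranges,5
RUBY

file = Tempfile.new(["data_demo", ".rb"])
file.write(script)
file.close

# Running the script prints the sum of the embedded values.
output = `ruby #{file.path}`.strip
puts output  # => "8"
```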


    One thing I’ve learned while working with Go is to check Go’s source files for good examples. Even though you cannot do this with Ruby to the same degree, because the sources are in C, you can still check the parts of the sources that are written in Ruby, as well as the gems and tools maintained within the Ruby sources. Here are some examples:

  • Numbered Parameters in Ruby 2.7

    A new feature called “numbered parameters” will see the light of day in the Ruby 2.7 release at the end of the year. What caught my attention was not the feature itself but the mixed reception it got from the community.


    Whenever you open a block, you have the chance to pass a list of parameters:

    object.method { |parameter_1, parameter_2, ... parameter_n| ... }

    For example if you were iterating over a hash to print its keys with matching values you’d do something like this:

    my_hash.each { |key, value| puts "#{key}: #{value}" }


    With the new numbered parameters, you are going to be able to save yourself some keystrokes: use @ followed by the number representing the position of the parameter you want to use. Our previous code would now look like this:

    my_hash.each { puts "#{@1}: #{@2}" }


    Other languages, like Kotlin, use an implicit it as the default variable name within a block: { println(it) }

    This is not the case with this new feature.

    object.method { p @1 }

    is syntactic sugar for

    object.method { |parameter_1,| p parameter_1 }

    and not for

    object.method { |parameter| p parameter } 

    So pay attention to the dataset you are passing because you might get some unexpected behaviour like this one:

    [1, ['a', 'b'], 3, {foo: "bar"}].map { @1 }
    => [1, "a", 3, {:foo=>"bar"}]

    As you can see, 1 and 3 are taken as the first numbered parameter, as expected. But each element of the ['a', 'b'] array becomes one of the numbered parameters, so @1 => 'a' and @2 => 'b'. The hash is treated as a single object, so it won’t get split either.

    This shouldn’t come as a surprise since it’s the expected behaviour of doing

    [1, ['a', 'b'], 3, {foo: "bar"}].map { |x,| x }

    but in this case we make it clear to the reader when we write |x,|. There is no plan to make it a default variable name, which is odd, because that’s exactly what was requested in the original issue.
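    Since @1 requires a Ruby 2.7 preview build, the same destructuring behaviour can be demonstrated today with explicit block parameters (a sketch with made-up variable names):

```ruby
data = [1, ["a", "b"], 3, { foo: "bar" }]

# A trailing comma gives the block more than one positional slot,
# so arrays get destructured into the numbered positions.
with_comma    = data.map { |x,| x }   # => [1, "a", 3, {:foo=>"bar"}]

# Without the comma the block takes a single parameter,
# and each element is passed through untouched.
without_comma = data.map { |x| x }    # => [1, ["a", "b"], 3, {:foo=>"bar"}]

p with_comma
p without_comma
```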


    As I already mentioned this is what the person who requested the issue wanted to have but it was not accepted in its original form because of backwards compatibility. Introducing new keywords to the Ruby language is a no-go at the moment because Matz is not a fan of breaking developers’ old code with newer versions of Ruby.

    I appreciate that Matz takes such a strong stance on this matter. I think it’s important to update your code base to the latest version of Ruby, but the harder an update is, the less likely you are to end up doing it. If I updated to Ruby 2.7 and started seeing breaking changes everywhere in my code base, I’d just put it off for as long as possible. Instead, the upgrade experience should be a welcoming one.


    I don’t know how many times you pass a list of parameters to a block versus how many times you pass a single parameter, but I’m pretty sure in every code base you can find many more instances of the latter than the former. So the question is: How valuable is this new feature?

    Nobody seems to like the fact that numbered parameters start with @, and some community members are also saying that developers could get confused, thinking the numbered parameters are instance variables.

    There is currently an open issue requesting to reconsider numbered parameters, because in its current state the feature brings more pain than value. What do you think? Do you like numbered parameters? Do you think they should be implemented in a different way? Would you rather not have them at all? There’s some informal voting happening in case you want to chip in.

  • Installing openSUSE Tumbleweed On the Dell XPS 13

    This post will show you how to install openSUSE’s rolling release, Tumbleweed, on the Dell XPS 13 9350 FHD.

    Update 2016-06-30: BIOS 1.4.4 is out.

    Update 2016-06-22: The kernel flag is not needed anymore since kernel 4.6, which was introduced around Tumbleweed version 20160612.

    Update 2016-05-04: Added a section to fix the sound issues when using headphones.


    1. Create a recovery USB in case you want to return the machine to its original state.
    2. Get yourself a copy of openSUSE Tumbleweed.
    3. Create a bootable USB. There are instructions for Linux, Windows and OS X.


    Warning: Do not reboot the machine when the BIOS update is running!

    1. Download the latest BIOS update (1.3.3 at the time of writing).
    2. Save it under /boot/EFI.
    3. Reboot the machine.
    4. Press F12 and select BIOS update.


    1. Reboot the machine.
    2. Press F12 and configure to use Legacy BIOS and reboot.
    3. Boot from the Tumbleweed USB key and follow the installer instructions until you get to the partitioning stage.
    4. Remove all partitions and create an MSDOS partition table.
    5. Add your desired partitions inside the just created partition table. In my case I have a root, a swap and a home partition.
    6. Finish the installation process.


    Note: This issue was fixed on kernel 4.6, here is the bugzilla link.

    There is a reported issue that causes your screen to flicker. Until the fix gets merged into the kernel you can do this hack:

    1. Inside /etc/default/grub, add the kernel flag i915.enable_rc6=0
    2. Regenerate the grub configuration: grub2-mkconfig -o /boot/grub2/grub.cfg
    3. Restart your machine.


    When using headphones you will notice a high pitch when no sound is being played and a loud cracking sound when starting/stopping sound from an application.

    First fix the issue with the high pitch by setting the microphone boost volume.

    amixer -c 0 cset 'numid=10' 1

    To fix the problem with the cracking sound, the only fix I’ve found so far is to disable the SOUND_POWER_SAVE_ON_BAT option in tlp.

    augtool set /files/etc/default/tlp/SOUND_POWER_SAVE_ON_BAT 0

    You will need to reapply the battery settings for the changes to take effect, and set tlp up to be started at boot time.

    systemctl enable tlp.service --now

    Have a lot of fun…

  • Running Multiple Redis Instances

    This article will teach you how to run one or more Redis instances on a Linux server using systemd to spawn copies of a service.


    The easiest way to install Redis on Linux is with your distribution’s package manager. Here is how you would do it on openSUSE:

    sudo zypper install redis

    In case your distribution doesn’t provide a Redis package, you can always follow the upstream instructions to compile it from scratch.


    1. Make a copy of the example/default file that is provided by the package:
       cd /etc/redis/
       cp default.conf.example my_app.conf
       Use a name that will help you recognize the purpose of the instance. For example, if each instance will be mapped to a different application, give it the name of the application. If each instance will be mapped to the same application, use the port on which it will be running.
    2. Change the ownership of the newly created configuration file to user “root” and group “redis”:
       chown root:redis my_app.conf
    3. Configure the instance. Add a “pidfile”, a “logfile” and a “dir” to the .conf file:
       pidfile /var/run/redis/
       logfile /var/log/redis/my_app.log
       dir /var/lib/redis/my_app/
       Each of these attributes has to match the name of the configuration file without the extension. Make sure the “daemonize” option is set to “no” (this is the default value). If you set this option to “yes”, Redis and systemd will interfere with each other when spawning the processes:
       daemonize no
       Define a “port” number, and remember that each instance should be running on a different port:
       port 6379
    4. Create the database directory at the location given in the configuration file:
       install -d -o redis -g redis -m 0750 /var/lib/redis/my_app
       The database directory has to be owned by user “redis” and group “redis”, with permissions 0750.
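    Putting the attributes from step 3 together, the additions to my_app.conf end up looking like this (the exact pidfile name is my assumption, following the naming convention described above):

```
pidfile /var/run/redis/my_app.pid   # assumed name, matching the conf file
logfile /var/log/redis/my_app.log
dir /var/lib/redis/my_app/
daemonize no
port 6379
```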

    Repeat these steps for every instance you want to set up. In my case I set up a second instance called “my_other_app”

    ├── default.conf.example
    ├── my_app.conf
    └── my_other_app.conf


    In order for systemd to know how to enable and start each instance individually you will need to add a service unit inside the system configuration directory located at /etc/systemd/system. For convenience you might also want to start/stop all instances at once. For that you will need to add a target unit.

    In case you installed Redis on openSUSE, these two files are already provided for you under the system unit directory /usr/lib/systemd/system.

    1. Create the service unit file “redis@.service” with the following contents:
       [Unit]
       Description=Redis
       After=network.target
       PartOf=redis.target

       [Service]
       Type=simple
       User=redis
       Group=redis
       PrivateTmp=true
       PIDFile=/var/run/redis/
       ExecStart=/usr/sbin/redis-server /etc/redis/%i.conf
       Restart=on-failure

       [Install]
       WantedBy=multi-user.target
       The unit file is separated into sections. Each section consists of variables and the values assigned to them. In this example:
      • After: when the Redis instance is enabled, it will get started only after the network has been started.
      • PartOf: this instance belongs to redis.target and will get started/stopped as part of that group.
      • Type: simple means the service process doesn’t fork.
      • %i: a specifier that is expanded by systemd to the instance name, e.g. “my_app”.
    2. Create the target unit file “redis.target” with the following contents:
       [Unit]
       Description=Redis target allowing to start/stop all redis@.service instances at once


    If everything went as expected you should be able to interact with the individual instances:

    systemctl start redis@my_app
    systemctl enable redis@my_other_app

    And also with all the instances at the same time:

    systemctl restart redis.target
    systemctl stop redis.target


    If things didn’t go as expected and you cannot start the instance, make sure to check the instance’s status:

    systemctl status redis@my_app

    If the issue doesn’t show up there then check systemd’s journal:

    journalctl -u redis@my_app

    For example, if you forgot to give the right permissions to the configuration file, you’d see something like this in the journal:

    Apr 23 10:02:53 mxps redis-server[26966]: 26966:C 23 Apr 10:02:53.917
    # Fatal error, can't open config file '/etc/redis/my_app.conf'


    • Thanks to the openSUSE Redis package maintainers for creating such a nice package that you can learn from.
    • The book How Linux Works provided the details on how systemd instances work.
  • Profiling Vim

    I like Vim because it’s very fast. Unfortunately, the other day I found myself opening a diff file that took forever to load. The file had 58187 lines in it (this number will be important later on), but I never thought Vim would choke on something less than 2MB in size.

    This post was originally published on Medium


    If you find yourself in a similar situation this is what you can do in order to find out what is causing Vim to slow down.

    1. Open Vim and start profiling:
       :profile start /tmp/profile.log
       :profile func *
       :profile file *
       This tells Vim to save the results of the profile into /tmp/profile.log and to profile every file and function. Note: the profile.log file only gets written once you close Vim.
    2. Do the action that is taking a long time to run (in my case, opening the diff file):
       :edit /tmp/file.diff
       :profile pause
       :q!
    3. Analyze the data. There is a lot of information in /tmp/profile.log, but you can start by focusing on the Total time. In my case there was a clear offender with a total time of more than 14 seconds! It looked like this:
       FUNCTION  <SNR>24_PreviewColorInLine()
       Called 58187 times
       Total time: 14.430544
       Self time: 2.961442
       Remember the number of lines in the file I mentioned before? For me it was interesting to see that the function gets called just as many times.
    4. Pinpoint the offender. Finding out where a function is defined is very easy thanks to the <SNR> tag and the number right after it. Simply run :scriptnames and scroll until you find the index number you are looking for, in my case 24:
       24: ~/.vim/bundle/colorizer/autoload/colorizer.vim

    I opened a GitHub issue to make the developers of the plugin aware, but it seems the project has been left unmaintained, so I decided to remove it from my vimrc file.

  • Yes, Ship It!

    Last week I had the chance to participate in my first Hackweek. I never had such an experience at any other company I’ve worked for, and between my colleagues’ reports about previous editions and my own expectations, I was very excited to see what all the fuss was about.

    These types of events are not unique to SUSE; as a matter of fact, Twitter and a bunch of other companies were having their Hackweeks at the same time. I’m glad this is the case, because after having the chance to participate in one, I realize it’s a great way to promote creativity.

    A hackweek is basically a week where you get to work on anything you want. You are not expected to deliver anything; instead, you are encouraged to experiment and explore anything you think is worth spending time on.

    In order to make the most out of Hackweek, I decided to join a project instead of starting one of my own, so I could do some pairing. These kinds of interactions always make it a lot of fun for me, plus I get to learn a ton. That’s how I joined Cornelius Schumacher to work on Yes Ship It! This is a project he had already started on his own, so we were not doing everything from scratch.

    The approach of yes_ship_it is different from the typical release script. It doesn’t define a series of steps which are executed to make a release. It defines a sequence of assertions about the release, which then are checked and enforced.
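    In code, the assertion idea looks roughly like this (a toy sketch with invented class names, not yes_ship_it’s actual API): each assertion knows whether it already holds, and how to enforce itself when it doesn’t.

```ruby
# A toy assertion: in a real release this would inspect git state;
# here the check is stubbed with an instance variable.
class TagIsPushed
  def assert?
    @pushed ? true : false
  end

  def enforce
    @pushed = true
  end
end

# A release walks the assertion sequence, enforcing only what's missing,
# and succeeds once every assertion holds.
def release(assertions)
  assertions.each { |a| a.enforce unless a.assert? }
  assertions.all? { |a| a.assert? }
end

puts release([TagIsPushed.new])  # => true
```

Because the run only enforces whatever is missing, re-running it is harmless — which is exactly the property described below.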

    The first thing we decided to do together was a Rails app that allows you to track successful software releases. Since it was going to be 100% related to Yes Ship It!, we decided to call it Yes It Shipped!. Let me show you how trivial it is to add yes_ship_it to a project like the formstack-api gem.

    1. Install the yes_ship_it gem:
    $ gem install yes_ship_it
    2. Add a yes_ship_it.conf file:
    $ yes_ship_it init
    3. Release!
    $ yes_ship_it

    By default yes_ship_it will check if:

    • you are on the right release branch (master by default) and the code was pushed.
    • the working directory has nothing left to commit.
    • the version was updated.
    • the changelog was updated.
    • a tag was added and published.
    • a new version of the gem was built and published.

    The aim is to make it as generic as possible so you can adapt it to any project you have. For starters, you can remove any check from the process, and soon enough you will be able to add checks of your own.

    What I like most about it is that I can run yes_ship_it at any time. I don’t need to remember or double-check what the last step I did was, because that’s exactly what it will figure out for me.

    What do you think? Leave your comments below and remember to release early and release often!

  • Running openSUSE 13.2 on Linode

    Linode is one of my favorite VPS providers out there. One of the reasons why I like them is because they make it extremely easy to run openSUSE. This post is a quick tutorial on how to get you started.

    The first time you log in, you will be presented with the different Linode plans and the location where your server will reside.

    I’ll choose the smallest plan

    Once you see your Linode listed, click on its name.

    Now click on “Deploy an image”

    In there we will select openSUSE 13.2 and the amount of disk space. You can leave the defaults, which will use the full disk size with a 256MB swap partition. Choose your password and click Deploy.

    This will take a bit, but as soon as it’s done you will be able to boot your machine.

    Finally click on the “Remote Access” tab so you can see different options to log into your machine.

    I personally like to ssh in from my favorite terminal app:

    ssh root@

    You will be welcomed by openSUSE with the following message:

    Have a lot of fun...
    linux:~ #

    Now you can play with your new openSUSE 13.2 box. Enjoy!

  • Getting Started With Continuous Delivery

    More and more companies require developers to understand Continuous Integration and Continuous Delivery, but starting to implement them in your projects can be a bit overwhelming. Start with a simple website, and soon enough you will feel confident enough to do it with more complex projects.


    TDD/BDD, CI/CD, XP, Agile, Scrum …. Ahhhhh, leave me alone I just want to code!

    Yes, all these methodologies can be a bit complicated at first, but simply because you are not used to them. Like a muscle, you need to train them, and the more you do so, the sooner doing them will stop feeling like a total waste of time.

    Once you have made up your mind that CD is for you, your team or your project, you will need to define a process and follow it. Don’t make it easy to break the process, and before you know it, you and your team will feel like fish in water.


    There are many ways you can solve this problem. I will use a particular stack; if you don’t have experience with any of these tools, try to implement it with ones you do have experience with.

    • VPS: DigitalOcean (alternatives: Linode or Vagrant)
    • Configuration Management: Ansible (alternatives: Chef or Puppet)
    • Static site generator: Middleman (alternatives: Jekyll or pure HTML)
    • CI/CD Server: Semaphore (alternatives: Codeship or Jenkins)

    The first thing is to create a new droplet in DO (you could also do this with Ansible, but we won’t in this tutorial). Make sure there is a deploy user and to set up SSH keys for it (again, something we could do with Ansible, but we’ll leave that for another post). Set up your domain to point to the new server’s IP address; I will use ‘’.


    Create a folder for your playbook, and inside of it start with a file called ansible.cfg. There we will override the default configuration by pointing to a new inventory inside your playbook’s folder and specifying the deploy user.


    Now in our inventory file we specify a group called web and include our domain.


    Our tasks will be defined in simple-webserver.yml

    - name: Simple Web Server
      hosts: web
      sudo: True
      tasks:
        - name: Install nginx
          apt: pkg=nginx state=installed update_cache=true
          notify: start nginx
        - name: remove default nginx site
          file: path=/etc/nginx/sites-enabled/default state=absent
        - name: Assures project root dir exists
          file: >
        - name: copy nginx config file
          template: >
          notify: restart nginx
        - name: enable configuration
          file: >
          notify: restart nginx
      handlers:
        - name: start nginx
          service: name=nginx state=started
        - name: restart nginx
          service: name=nginx state=restarted

    In it we make reference to a template called templates/nginx.conf.j2 where we will specify a simple virtual host.

    server {
            listen *:80;
            root /srv/www/;
            index index.html index.htm;
            location / {
                    try_files $uri $uri/ =404;
            }
    }
    I’ll show you in another post how to do this same setup with multiple virtual hosts, in case you run multiple sites.

    Run it by calling:

    ansible-playbook simple-webserver.yml


    Middleman has a very simple way to deploy over rsync. Just make sure you have the following gem in your Gemfile:

    gem 'middleman-deploy'

    And then add something like this to your config.rb

    activate :deploy do |deploy|
      deploy.method = :rsync = ''
      deploy.path   = '/srv/www/'
      deploy.user   = 'deploy'
    end

    Before you can deploy, you need to remember to build your site. This is error-prone, so instead we will add a rake task to our Rakefile to do it for us.

    desc 'Build site'
    task :build do
      `middleman build`
    end

    desc 'Deploy site'
    task :deploy do
      `middleman deploy`
    end

    desc 'Build and deploy site'
    task :build_deploy => [:build, :deploy]


    Technically you don’t really need git flow for this process but I do believe having a proper branching model is key to a successful CD environment. Depending on your team’s process you might want to use something else but if you don’t have anything defined please take a look at git flow, it might be just what you need.

    For this tutorial I will oversimplify the process and just use the develop, master and release branches by following these three steps:

    1. Commit all the desired changes into the develop branch
    2. Create a release and add the release’s information
    3. Merge the release into master

    Let’s go through the steps in the command line. We start by adding the new features and committing them.

    git add Rakefile
    git commit -m 'Add rake task for easier deployment'

    Now we create a release.

    git flow release start '1.0.0'

    This would be a good time to test everything out. Bump the version number of your software (in my case 1.0.0), update the change log and do any last minute fixes.

    Commit the changes and let’s wrap up this step by finishing our release.

    git flow release finish '1.0.0'

    Try to write something meaningful for your tag message so you can easily refer to a version later on by its description.

    git tag -n

    Hold your horses and don’t push your changes just yet.


    Add a new project from GitHub or Bitbucket.

    For the build you might want to have something along the lines of:

    bundle install --path vendor/bundle
    bundle exec rake spec

    Now go into the project’s settings, inside the Deployment tab, and add a server.

    Because we are using the generic option, Semaphore will need access to our server. Generate an SSH key and paste the private key into Semaphore and the public key onto your server.

    For the deploy commands you need to have something like this:

    ssh-keyscan -H -p 22 >> ~/.ssh/known_hosts
    bundle exec rake build_deploy


    Push your changes to the master branch and voilà: Semaphore will build and deploy your site.

    Once you get into the habit of doing this with your website, you will feel more confident doing it with something like a Rails application.

    If you have any questions please leave them below, I’ll respond to every single one of them.

  • Installing SQL Developer on Ubuntu 9.04

    One of the major reasons why I still use my Windows box is that I haven’t found a substitute for TOAD. I know I could make it work somehow using Wine, but I just didn’t feel like it. Since Oracle is so supportive of Linux, I looked for something on their website, and to my surprise I found SQL Developer. So far, so good! I like it, and I am going to start using it for work. Here are the steps I followed to make it work on my Ubuntu 9.04 box:

    1. Install the Java JDK: sudo apt-get install sun-java6-jdk
    2. Download Oracle SQL Developer for other platforms from Oracle’s website.
    3. Unzip the package into /home/{user}/Programs/sqldeveloper
    4. Run the .sh: sudo sh /home/{user}/Programs/sqldeveloper/
    5. When asked for the Java path, enter the following (be sure about your Java version): /usr/lib/jvm/java-6-sun-
    6. Enjoy!

    Since I enjoy launching commands from my Applications menu this is what I did:

    1. System > Preferences > Main Menu
    2. Go to the Programming tab
    3. New Item
    4. Name: SQLdeveloper
    5. Command: sh /home/{user}/Programs/sqldeveloper/
    6. OK

    Now I can go to my Applications > Programming and click on my SQLdeveloper icon.

    If you have any questions please comment about it or feel free to contact me.

    This post was originally published on my Tumblr blog