Mauro Morales

software developer

Author: Mauro

  • Vortex Core Mechanical Keyboard Review

    I got myself a new keyboard for my birthday, the Vortex Core. I wanted a mechanical keyboard that I could take everywhere with me. Being a 40% keyboard, I expected it to over-deliver on the portability side. What I didn’t expect is that I’d enjoy using this tiny keyboard so much, even for extended periods of time.

    SPECS

    • Four layers, three of which are programmable without having to flash the device
    • Cherry MX switches. I got mine with silent red ones
    • DSA Profile keycaps
    • RGB LEDs (also programmable)
    • ANSI layout (for the most part)
    • Aluminum case with 4 rubber feet
    • Micro USB connector

    The quality of the printing is great and I really appreciate that the side prints are color-coded by function key. This is necessary to program the other layers, but even if it weren’t, I wish more keyboard manufacturers would do it.

    COMPARISON SHOTS

    Vortex Core compared to Das Keyboard 4 Ultimate

    Vortex Core compared to Macbook Pro 13″

    Vortex Core compared to Magic Trackpad

    MY PERSONAL CUSTOMIZATIONS

    I really like how this keyboard looks and feels, but there were two changes I made to make it perfect for me:

    1. Programmed layer 2 so I could access all numbers and symbols plus arrow keys via the Fn key or a combination of Fn+Shift. If you want to know more about how to program the Vortex Core, check out this blog post.
    2. Switched the left space bar with a Vim keycap I bought from Vimcaps. The Vim green fits perfectly with the beige and gray of the other keys.

    GOTCHAS

    I configure my OS to swap Caps Lock with another Ctrl for easy access but, as you can see from the layout, the physical Caps Lock is missing. At first I considered reprogramming another key to be Ctrl, because I find the position of Ctrl very hard to reach. However, I noticed that I can easily press the Ctrl key with my palm, and I ended up liking this better. So much so that I also adopted it while using my Ergodox EZ.

    ALTERNATIVES

    The Vortex wasn’t really my first option for a 40%. I had my eyes on a Planck EZ because I’m very pleased with the quality of the Ergodox EZ by the same company. I ended up picking the Vortex because (a) I liked the retro look better, (b) it was about 60 EUR cheaper and (c) I could get it from a local shop here in Belgium.

    FINAL THOUGHTS AND RECOMMENDATIONS

    Overall, I’m very happy with this keyboard. I enjoy using it for my everyday writing, whether it’s code or prose. It’s a great addition to my keyboard fleet and I’ll keep using it on a daily basis. The only thing I’d change is the cable that comes with it. Everything in this unit has been built to a very high standard, and an average USB cable doesn’t do it justice. But if that doesn’t bother you, or you don’t mind spending extra on a nice cable, you won’t be disappointed. So, if you’re in the market for a well-built, good-looking, portable and programmable keyboard, you should consider the Vortex Core.

    Having said that, I wouldn’t recommend the Vortex Core to someone who’s looking to buy their first mechanical keyboard. Instead, go with something a bit bigger first so you can get an idea of what you do and don’t like about mechanical keyboards before buying something as extreme as a 40%. A good option could be a Das Keyboard 4. It isn’t programmable, but the quality is great and I really like the dedicated media keys.

    For those who have already experienced a mechanical keyboard and are considering the Vortex Core, remember that getting used to a new layout takes time. The great thing about this keyboard is that it’s programmable, so you can make that process less annoying by changing the default layout to something you feel more comfortable with. I think it’s better to use something that feels natural, so you find yourself coming back to your keyboard over and over again, than something you might think is the ultimate layout. Little by little you can introduce minor modifications that you can adapt to easily. I’ve been reprogramming my Ergodox EZ for the past 4 years and I’ll probably continue doing so in the years to come.

  • Running a Patched Ruby on Heroku

    You use a PaaS because you want all the underlying infrastructure and configuration of your application to be hidden from you. However, there are times when you are forced to look deeper into the stack. In this article I want to share how simple it is to run a patched version of Ruby on Heroku.

    CONTEXT

    It all started while trying to upgrade Ruby in an application. Unfortunately, every newer version I tried made the application break. After some searching around, I came across a bug report from 3 years ago in Ruby upstream.

    The issue was actually not in Ruby but in Onigmo, the regular expression library that Ruby uses under the hood. All versions since 2.4 were affected, i.e. all supported versions, including 2.5.8, 2.6.6 and 2.7.1 at the moment of writing. Lucky for me, Onigmo had been patched upstream, but the patch will only land in a new Ruby version later this year.

    This meant I was going to have to patch Ruby myself. For local development this is not a big deal, but I wasn’t sure if it was possible on Heroku. I remembered from Cloud Foundry and Jenkins X that the part of the platform taking care of the build and installation of the language is the buildpack, so I decided to investigate buildpacks on Heroku.

    HEROKU’S RUBY BUILDPACK

    Heroku’s Ruby buildpack is used to run your application whenever there’s a Gemfile and a Gemfile.lock file. By parsing these, it figures out which version of Ruby it’s meant to use.
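    As a rough illustration (not the buildpack’s actual code), extracting the requested Ruby version boils down to reading the RUBY VERSION section that Bundler writes into Gemfile.lock:

```ruby
# Hypothetical sketch of parsing a Gemfile.lock for the requested Ruby
# version; the real buildpack is more thorough than this.
lockfile = <<~LOCK
  GEM
    specs:

  RUBY VERSION
     ruby 2.6.6p146
LOCK

# Bundler records a line like "ruby 2.6.6p146" under RUBY VERSION.
requested = lockfile[/^RUBY VERSION\s*\n\s*ruby (\d+\.\d+\.\d+)/, 1]
puts requested  # => 2.6.6
```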

    Once it knows which version of Ruby to install, it runs bin/support/download_ruby to download a pre-built package and extract it so it’s available to your application. As a quick hack, I decided to modify this file to do what I had done in my development environment to patch Ruby.

    1. First, download the Ruby source code from upstream instead of the pre-built version from Heroku:

       curl --fail --silent --location -o /tmp/ruby-2.6.6.tar.gz https://cache.ruby-lang.org/pub/ruby/2.6/ruby-2.6.6.tar.gz
       tar xzf /tmp/ruby-2.6.6.tar.gz -C /tmp/src
       cd /tmp/src/ruby-2.6.6

    2. Then apply a patch from a file I placed under bin/support/ (probably not the best place, but OK while I was figuring things out):

       patch < "$BIN_DIR/support/onigmo-fix.diff"

    3. And finally, build and install Ruby:

       autoconf
       ./configure --disable-install-doc --prefix "$RUBY_BOOTSTRAP_DIR" --enable-load-relative --enable-shared
       make
       make install

    You can find an unpolished but working version of what I did here.

    USING THE BUILDPACK IN YOUR APPLICATION

    Now all that is left is to tell your application to use your custom buildpack instead of Heroku’s supported one. You can do this on the command line by running:

    heroku buildpacks:set https://github.com/some/buildpack.git -a myapp

    Or by adding a file called app.json to the root directory of your application sources (not the buildpack sources). I ended up using this form since I prefer to have as much of the platform configuration as possible in code.

    {
      "environments": {
        "staging": {
          "addons": ["heroku-postgresql:hobby-dev"],
          "buildpacks": [
            {
              "url": "https://github.com/some/buildpack.git"
            }
          ]
        }
      }
    }

    Now every time a deployment is made to this environment, the application will download the Ruby sources, then patch, build and install them.

    This of course is not optimal, since you’ll be wasting a lot of time building Ruby. Instead, you should do something similar to what Heroku does: pre-build the patched version of Ruby and download it from an S3 bucket.

    CONCLUSION

    Using a patched version of Ruby comes with a heavy price tag: maintenance. You should keep applying updates until the bug is fixed upstream (at least security updates). And you also need to use the patched version in all your environments, e.g. production, staging, et al., including your CI. Whether all this extra work is worth it is something you’ll need to analyze. In the cases where the benefits outweigh the costs, it’s great to know that you don’t have to give up all the benefits of a platform like Heroku to run your own version of Ruby.

  • Ruby’s DATA Stream

    The STDIN and ARGF streams are commonly used in Ruby; however, there’s also the less popular DATA one. Here’s how it works, along with some examples in the wild.

    HOW TO READ FROM DATA?

    Like with any other stream, you can use gets and readlines. This behaviour is defined by the IO class. However, there’s a caveat: your script needs to have a data section. To define one, use the __END__ keyword to separate code from data.

    $ cat hello_world.rb
    puts DATA.gets
    __END__
    hello world!
    
    $ ruby hello_world.rb
    hello world!

    Look at that, another way to code hello world in Ruby. Without the __END__ keyword, you’ll get the following error:

    NameError: uninitialized constant DATA

    WHEN TO USE IT?

    You could use the data section of the script if you wanted to keep the data and the code really close, or if you wanted to do some sort of pre-processing on your sources. But to be honest, the only real benefit I can think of is performance. Instead of starting a second IO operation to read a file containing the data, it gets loaded at the same time as the script.
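    As a contrived sketch of the “keep the data close to the code” use case (the file name and lookup table are invented for illustration), a script can carry a small table below __END__ and read it through DATA without opening a second file:

```ruby
require "tempfile"

# A self-contained script whose data section holds a lookup table.
script = <<~'RUBY'
  capitals = DATA.readlines.to_h { |line| line.chomp.split(":") }
  puts capitals["Belgium"]
  __END__
  Belgium:Brussels
  Germany:Berlin
RUBY

file = Tempfile.new(["capitals", ".rb"])
file.write(script)
file.close

# Run it as its own process, since DATA only exists for the main script.
puts `ruby #{file.path}`  # => Brussels
```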

    EXAMPLES

    One thing I’ve learned while working with Go is to check Go’s source files for good examples. Even though you cannot do this with Ruby to the same degree, because the sources are in C, you can still check the parts of the sources written in Ruby, as well as the gems and tools maintained within the Ruby sources. Here are some examples:

  • Numbered Parameters in Ruby 2.7

    A new feature called “numbered parameters” will see the light of day in the Ruby 2.7 release at the end of the year. What caught my attention was not the feature itself but the mixed reception it got from the community.

    BLOCK PARAMETERS

    Whenever you open a block you have the chance to pass a list of parameters:

    object.method { |parameter_1, parameter_2, ... parameter_n| ... }

    For example, if you were iterating over a hash to print its keys with their matching values, you’d do something like this:

    my_hash.each { |key, value| puts "#{key}: #{value}" }

    NUMBERED PARAMETERS

    With the new numbered parameters you’ll be able to save yourself some keystrokes: use @ followed by the number representing the position of the parameter you want to use. Our previous code would now look like this:

    my_hash.each { puts "#{@1}: #{@2}" }

    NO DEFAULT VARIABLE NAME

    Other languages, like Kotlin, use “it” as the default variable name within a block.

    collection.map { println(it) }

    This is not the case with this new feature.

    object.method { p @1 }

    is syntactic sugar for

    object.method { |parameter_1,| p parameter_1 }

    and not for

    object.method { |parameter| p parameter } 

    So pay attention to the dataset you are passing, because you might get some unexpected behaviour like this:

    [1, ['a', 'b'], 3, {foo: "bar"}].map { @1 }
    => [1, "a", 3, {:foo=>"bar"}]

    As you can see, 1 and 3 are taken as the first numbered parameter, as expected. The nested array, however, gets destructured: each of its elements becomes one of the numbered parameters, so @1 => 'a' and @2 => 'b', and only @1 is returned. The hash is treated as a single object, so it won’t get split either.

    This shouldn’t come as a surprise since it’s the expected behaviour of doing

    [1, ['a', 'b'], 3, {foo: "bar"}].map { |x,| x }

    but in this case we make it clear to the reader when we write |x,|. There is no plan to make it a default variable name, which is odd because that’s exactly what was requested in the original issue.
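    You can see the difference between the two forms with plain block parameters, no Ruby 2.7 required:

```ruby
data = [1, ['a', 'b'], 3, { foo: "bar" }]

# |x,| (note the trailing comma) destructures array elements, like @1 does:
with_comma = data.map { |x,| x }

# |x| takes each element whole, which is what the original issue asked for:
without_comma = data.map { |x| x }

p with_comma     # => [1, "a", 3, {:foo=>"bar"}]
p without_comma  # => [1, ["a", "b"], 3, {:foo=>"bar"}]
```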

    BACKWARDS COMPATIBILITY IS A HIGH PRIORITY

    As I already mentioned, this is what the person who filed the issue wanted to have, but it was not accepted in its original form because of backwards compatibility. Introducing new keywords to the Ruby language is a no-go at the moment because Matz is not a fan of breaking developers’ old code with newer versions of Ruby.

    I appreciate that Matz takes such a strong stance on this matter. I think it’s important to keep your code base on the latest version of Ruby, but the harder an update is, the less likely it is that you’ll end up doing it. If I update to Ruby 2.7 and start seeing breaking changes everywhere in my code base, I’m just going to put it off for as long as possible. Instead, this experience should be a welcoming one.

    PAIN OR GAIN?

    I don’t know how many times you pass a list of parameters to a block versus a single parameter, but I’m pretty sure that in every code base you can find many more instances of the latter than the former. So the question is: how valuable is this new feature?

    Nobody seems to like the fact that numbered parameters start with @, and some community members are also saying that developers could get confused, thinking that numbered parameters are instance variables.

    There is currently an open issue requesting that numbered parameters be reconsidered because, in their current state, they bring more pain than value. What do you think? Do you like numbered parameters? Do you think they should be implemented in a different way? Would you rather not have them at all? There’s some informal voting happening in case you want to chip in.

  • Installing openSUSE Tumbleweed On the Dell XPS 13

    This post will show you how to install openSUSE’s rolling release, Tumbleweed, on the Dell XPS 13 9350 FHD.

    Update 2016-06-30: BIOS 1.4.4 is out.

    Update 2016-06-22: The kernel flag is no longer needed since kernel 4.6, which was introduced around Tumbleweed version 20160612.

    Update 2016-05-04: Added a section on fixing the sound issues when using headphones.

    PREPARATION

    1. Create a recovery USB in case you want to return the machine to its original state.
    2. Get yourself a copy of openSUSE Tumbleweed.
    3. Create a bootable USB. There are instructions for Linux, Windows and OS X.

    UPDATE THE BIOS

    Warning: Do not reboot the machine when the BIOS update is running!

    1. Download the latest BIOS update (1.3.3 at the time of writing).
    2. Save it under /boot/EFI.
    3. Reboot the machine.
    4. Press F12 and select BIOS update.

    INSTALLATION

    1. Reboot the machine.
    2. Press F12 and configure to use Legacy BIOS and reboot.
    3. Boot from the Tumbleweed USB key and follow the installer instructions until you get to the partitioning stage.
    4. Remove all partitions and create an MSDOS partition table.
    5. Add your desired partitions inside the newly created partition table. In my case I have a root, a swap and a home partition.
    6. Finish the installation process.

    FIXING THE FLICKERING DISPLAY

    Note: This issue was fixed on kernel 4.6, here is the bugzilla link.

    There is a reported issue that causes the screen to flicker. Until the fix is merged into the kernel, you can apply this workaround:

    1. Inside /etc/default/grub add the kernel flag i915.enable_rc6=0
    2. grub2-mkconfig -o /boot/grub2/grub.cfg
    3. Restart your machine.

    FIXING THE SOUND WHEN USING HEADPHONES

    When using headphones you will notice a high pitch when no sound is being played and a loud cracking sound when starting/stopping sound from an application.

    First, fix the high-pitch issue by setting the microphone boost volume:

    amixer -c 0 cset 'numid=10' 1

    To fix the cracking sound, the only fix I’ve found so far is to disable the SOUND_POWER_SAVE_ON_BAT option in tlp:

    augtool set /files/etc/default/tlp/SOUND_POWER_SAVE_ON_BAT 0

    You will need to reapply the battery settings for the changes to take effect, and set tlp up to start at boot time:

    systemctl enable tlp.service --now

    Have a lot of fun…

  • Running Multiple Redis Instances

    This article will teach you how to run one or more Redis instances on a Linux server using systemd to spawn copies of a service.

    INSTALLING REDIS

    The easiest way to install Redis on Linux is with your distribution’s package manager. Here is how you would do it on openSUSE:

    sudo zypper install redis

    In case your distribution doesn’t provide a Redis package, you can always follow the upstream instructions to compile it from scratch.

    CONFIGURING A REDIS INSTANCE

    1. Make a copy of the example/default file provided by the package:

       cd /etc/redis/
       cp default.conf.example my_app.conf

       Use a name that will help you recognize the purpose of the instance. For example, if each instance will be mapped to a different application, give it the name of the application. If each instance will be mapped to the same application, use the port it will be running on.
    2. Change the ownership of the newly created configuration file to user “root” and group “redis”:

       chown root:redis my_app.conf

    3. Add a “pidfile”, a “logfile” and a “dir” to the .conf file:

       pidfile /var/run/redis/my_app.pid
       logfile /var/log/redis/my_app.log
       dir /var/lib/redis/my_app/

       Each of these attributes has to match the name of the configuration file without the extension. Make sure the “daemonize” option is set to “no” (this is the default value). If you set this option to yes, Redis and systemd will interfere with each other when spawning the processes:

       daemonize no

       Define a “port” number and remember that each instance should be running on a different port:

       port 6379

    4. Create the database directory at the location given in the configuration file:

       install -d -o redis -g redis -m 0750 /var/lib/redis/my_app

       The database directory has to be owned by user “redis” and group “redis”, with permissions 750.

    Repeat these steps for every instance you want to set up. In my case I set up a second instance called “my_other_app”:

    .
    ├── default.conf.example
    ├── my_app.conf
    └── my_other_app.conf
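    To illustrate the naming rule from step 3, here is a small hypothetical Ruby check (the file name and contents are invented for this example) that verifies the pidfile, logfile and dir entries of a configuration all match its base name:

```ruby
conf_name = "my_app"
conf = <<~CONF
  pidfile /var/run/redis/my_app.pid
  logfile /var/log/redis/my_app.log
  dir /var/lib/redis/my_app/
  daemonize no
  port 6379
CONF

# Every pidfile/logfile/dir path must be derived from the conf's base name.
mismatches = conf.scan(/^(pidfile|logfile|dir) (\S+)/).reject do |_key, path|
  File.basename(path).sub(/\.(pid|log)\z/, "") == conf_name
end

puts mismatches.empty? ? "conf is consistent" : "fix these: #{mismatches.inspect}"
```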

    ADDING UNITS TO SYSTEMD FOR THE REDIS SERVICE

    In order for systemd to know how to enable and start each instance individually, you need to add a service unit inside the system configuration directory, located at /etc/systemd/system. For convenience, you might also want to start/stop all instances at once; for that, you need to add a target unit.

    If you installed Redis on openSUSE, these two files are already provided for you under the system unit directory /usr/lib/systemd/system.

    1. Create the service unit file “redis@.service” with the following contents:

       [Unit]
       Description=Redis
       After=network.target
       PartOf=redis.target

       [Service]
       Type=simple
       User=redis
       Group=redis
       PrivateTmp=true
       PIDFile=/var/run/redis/%i.pid
       ExecStart=/usr/sbin/redis-server /etc/redis/%i.conf
       Restart=on-failure

       [Install]
       WantedBy=multi-user.target redis.target

       The unit file is separated into sections. Each section consists of variables and the values assigned to them. In this example:
       • After: when the Redis instance is enabled, it will get started only after the network has been started.
       • PartOf: this instance belongs to redis.target and will get started/stopped as part of that group.
       • Type: simple means the service process doesn’t fork.
       • %i: a specifier that systemd expands to the instance name, e.g. “my_app”.
    2. Create the target unit file “redis.target” with the following contents:

       [Unit]
       Description=Redis target allowing to start/stop all redis@.service instances at once

    INTERACTING WITH REDIS

    If everything went as expected you should be able to interact with the individual instances:

    systemctl start redis@my_app
    systemctl enable redis@my_other_app

    And also with all the instances at the same time:

    systemctl restart redis.target
    systemctl stop redis.target

    TROUBLESHOOTING

    If things didn’t go as expected and you cannot start an instance, make sure to check the instance’s status:

    systemctl status redis@my_app

    If the issue doesn’t show up there then check systemd’s journal:

    journalctl -u redis@my_app

    For example, if you forgot to give the configuration file the right permissions, you’d see something like this in the journal:

    Apr 23 10:02:53 mxps redis-server[26966]: 26966:C 23 Apr 10:02:53.917
    # Fatal error, can’t open config file ‘/etc/redis/my_app.conf’

    ACKNOWLEDGMENTS

    • Thanks to the openSUSE Redis package maintainers for creating such a nice package that you can learn from.
    • The book How Linux Works provided the details on how systemd instances work.

  • Profiling Vim

    I like Vim because it’s very fast. Unfortunately, the other day I found myself opening a diff file that took forever to load. The file had 58187 lines in it (this number will be important later on), but I never thought Vim would choke on something less than 2 MB in size.

    This post was originally published on Medium.

    FINDING OUT WHICH PLUGIN IS MAKING VIM SLOW

    If you find yourself in a similar situation, this is what you can do to find out what is causing Vim to slow down.

    1. Open Vim and start profiling:

       :profile start /tmp/profile.log
       :profile func *
       :profile file *

       This tells Vim to save the results of the profile into /tmp/profile.log and to run the profile for every file and function. Note: the profile.log file only gets generated once you close Vim.
    2. Do the action that is taking a long time to run (in my case, opening the diff file):

       :edit /tmp/file.diff
       :profile pause
       :q!

    3. Analyze the data. There is a lot of information in /tmp/profile.log, but you can start by focusing on the total time. In my case there was a clear offender, with a total time of more than 14 seconds! It looked like this:

       FUNCTION  <SNR>24_PreviewColorInLine()
       Called 58187 times
       Total time: 14.430544
        Self time:  2.961442

       Remember the number of lines in the file I mentioned before? For me it was interesting to see that the function gets called just as many times.
    4. Pinpoint the offender. Finding out where a function is defined is very easy thanks to the <SNR> tag and the number right after it. Simply run :scriptnames and scroll until you find the index number you are looking for, in my case 24:

       24: ~/.vim/bundle/colorizer/autoload/colorizer.vim

    I opened a GitHub issue to make the developers of the plugin aware, but it seems the project has been left unmaintained, so I decided to remove it from my vimrc file.

  • Yes, Ship It!

    Last week I had the chance to participate in my first Hackweek. I’d never had such an experience at any other company I’ve worked for, and between my colleagues’ reports of previous editions and my own expectations, I was very excited to see what all the fuss was about.

    These types of events are not unique to SUSE; as a matter of fact, Twitter and a bunch of other companies were having their hack weeks at the same time. I’m glad this is the case because, after having the chance to participate in one, I realize it’s a great way to promote creativity.

    A hackweek is basically a week where you get to work on anything you want. You are not expected to deliver anything; instead, you are encouraged to experiment and explore anything you think is worth spending time on.

    To make the most out of Hackweek, I decided to join a project instead of starting one of my own, so I could do some pairing. These kinds of interactions always make it a lot of fun for me, plus I get to learn a ton. That’s how I joined Cornelius Schumacher to work on Yes Ship It! This is a project he had already started on his own, so we weren’t doing everything from scratch.

    The approach of yes_ship_it is different from the typical release script. It doesn’t define a series of steps which are executed to make a release. It defines a sequence of assertions about the release, which then are checked and enforced.
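    To make that concrete, here is a toy sketch of the idea in Ruby. This is not yes_ship_it’s actual API, just an illustration of “check, then enforce”; the Assertion class and the state hash are invented for this example:

```ruby
# Each assertion knows how to check a condition and how to enforce it
# when the check fails (invented class, for illustration only).
class Assertion
  def initialize(name, check:, enforce:)
    @name, @check, @enforce = name, check, enforce
  end

  def ensure!
    return "#{@name}: ok" if @check.call
    @enforce.call
    "#{@name}: enforced"
  end
end

state = { tag_pushed: false, changelog_updated: true }

release_assertions = [
  Assertion.new("changelog updated",
                check:   -> { state[:changelog_updated] },
                enforce: -> { state[:changelog_updated] = true }),
  Assertion.new("tag published",
                check:   -> { state[:tag_pushed] },
                enforce: -> { state[:tag_pushed] = true }),
]

# Running the sequence is idempotent: satisfied assertions are no-ops.
p release_assertions.map(&:ensure!)
# => ["changelog updated: ok", "tag published: enforced"]
```

    This also shows why you can run such a tool at any time: assertions that are already satisfied simply pass.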

    The first thing we decided to do together was a Rails app that allows you to track successful software releases. Since it was going to be 100% related to Yes Ship It!, we decided to call it Yes It Shipped!. Let me show you how trivial it is to add yes_ship_it to a project like the formstack-api gem.

    1. Install the yes_ship_it gem:

       $ gem install yes_ship_it

    2. Add a yes_ship_it.conf file:

       $ yes_ship_it init

    3. Release!

       $ yes_ship_it


    By default yes_ship_it will check if:

    • you are on the right release branch (master by default) and the code has been pushed.
    • the working directory has no uncommitted changes.
    • the version was updated.
    • the changelog was updated.
    • a tag was added and published.
    • a new version of the gem was built and published.

    The aim is to make it as generic as possible so you can adapt it to any project you have. For starters, you can remove any check from the process, and soon enough you will be able to add checks of your own.

    What I like most about it is that I can run yes_ship_it at any time. I don’t need to remember or double-check what the last step I did was, because that’s exactly what it will do for me.

    What do you think? Leave your comments below and remember to release early and release often!