Mauro Morales

Software Developer

My Personal Experience Using AI

There’s been a huge buzz around AI for a while now. Unless you’ve been living under a rock, it’s hard to avoid the topic. So, a month or two back, I decided to finally give it an honest shot and see whether AI can bring any benefits to my work or personal life.


No AI assistant was used to write this blog post.

AI for Work

Some colleagues have been using GitHub’s Copilot since the beta release and swear by it, and other colleagues say that OpenAI’s ChatGPT has become part of their daily flow, so I decided to try both.

GitHub’s Copilot for code generation

Context is crucial for AI, because models are trained on datasets. The quantity and quality of that data, plus the training algorithms, determine the quality of the model, and different models will be better at different tasks. GitHub’s Copilot is designed to generate code, and it was trained on code hosted on GitHub.

At the time of testing Copilot, my main project was Kairos, an OS (and more) for Edge Kubernetes. Kairos-specific software is written in Go, but like any other Linux distribution it’s full of configuration files, scripts and build tooling: Dockerfiles to build images, configuration files for different utilities (e.g. GRUB), an Earthfile to tie the build together, tons of YAML files and various shell scripts to do stuff here and there.

I use IntelliJ’s Goland IDE, where you can effortlessly install and configure the Copilot plugin. However, the quality of the suggestions was terrible. Most of the time there were no suggestions at all. Only a couple of times was I impressed with the generated code, and for my particular use case I wouldn’t consider it worth the 10USD/month or 100USD/year price tag.

To be honest, I was very disappointed. The experience was not at all what I was told to expect, but there are many variables that could explain this gap. For one, those who recommended it all use VSCode as their main editor, and since that’s the editor GitHub uses in their workspaces, it wouldn’t be a surprise if it were much better supported. Second, the type of project and combination of languages is different. I would expect Go to be quite well-supported, but I don’t really know how well Copilot has been trained to make recommendations on, say, how to configure GRUB.

But don’t let my words steer you away from trying it. See for yourself how well it works with your stack and setup, and then decide if the subscription is worth it for you. As for me, I will try to give it a shot using VSCode and if I don’t see a major improvement, then I will cancel.

ChatGPT as a review partner

I’m well aware that ChatGPT can be used for code generation, but I decided to use it only for other work-related tasks: helping me understand code, debug issues, and improve my technical writing.

Case 1: Explaining code

Digging through some of the code, I came across the following. Most of it was clear to me, but I didn’t understand the need for the last line.


Can you explain the following Go code?

type ImageExtractor interface {
	ExtractImage(imageRef, destination, platformRef string, local bool) error
}

type OCIImageExtractor struct{}

var _ ImageExtractor = OCIImageExtractor{}


This Go code defines an interface called ImageExtractor, which has one method called ExtractImage. The method takes four arguments: imageRef (a string representing the reference to an image), destination (a string representing the destination path for the extracted image), platformRef (a string representing the reference to the platform the image is built for), and local (a boolean indicating whether the image should be extracted from a local or remote source). The method returns an error if any error occurs during the extraction process.

The code also defines a struct called OCIImageExtractor. This struct does not have any fields or methods of its own.

Finally, the code creates a variable of type ImageExtractor that is initialized with an instance of the OCIImageExtractor struct. This line of code uses the blank identifier (_) to indicate that we don’t actually need to reference this variable, we just want to ensure that OCIImageExtractor implements the ImageExtractor interface. This is a way of enforcing that any other implementation of ImageExtractor will also have a ExtractImage method with the same signature.

I find this super useful. It feels like leaning over to the dev at the next desk and asking: “hey, do you know what this line here does?”. You can’t do that with a search engine and expect much success.
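The pattern itself is easy to replicate in your own code. Here’s a minimal, hypothetical sketch of the same compile-time check (Greeter and ConsoleGreeter are made-up names, not from Kairos):

```go
package main

import "fmt"

// Greeter is a stand-in interface with a single method.
type Greeter interface {
	Greet(name string) string
}

// ConsoleGreeter is a stand-in implementation.
type ConsoleGreeter struct{}

func (ConsoleGreeter) Greet(name string) string {
	return "Hello, " + name
}

// This line costs nothing at runtime but fails to compile if
// ConsoleGreeter ever stops satisfying Greeter (for example,
// if Greet's signature changes).
var _ Greeter = ConsoleGreeter{}

func main() {
	var g Greeter = ConsoleGreeter{}
	fmt.Println(g.Greet("Kairos")) // prints "Hello, Kairos"
}
```

The benefit is that a broken implementation is caught at the declaration site, with a clear error, rather than at the first place the struct happens to be used as the interface.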

Case 2: Explaining other technical topics

Now it’s time to reach over to the other side of my desk, to that fantastic sysadmin.


Why do I have some files called dm-0, dm-1, dm-2 under the /dev directory?


The files you are referring to, dm-0, dm-1, dm-2, are device mapper (DM) devices. Device mapper is a Linux kernel framework that provides a generic way to create virtual block devices by mapping physical block devices onto them. It is commonly used to create logical volumes for managing storage in Linux.

When you create a logical volume using LVM (Logical Volume Manager), the LVM creates a device mapper device file that represents the logical volume. The dm-X device files are the device mapper devices that represent the logical volumes.

For example, if you create a logical volume named “my_lv” using LVM, a device mapper device file named “/dev/dm-0” will be created, which represents the “my_lv” logical volume. If you create another logical volume named “your_lv”, a device mapper device file named “/dev/dm-1” will be created to represent the “your_lv” logical volume, and so on.

You can use these device mapper devices just like any other block devices, such as hard disk partitions or USB drives, to store and retrieve data. However, it’s important to note that the underlying physical devices that the logical volumes are mapped onto may change, so it’s important to use the logical volume names (e.g. “my_lv” or “your_lv”) rather than the device mapper device names (e.g. “/dev/dm-0” or “/dev/dm-1”) when referencing the logical volumes in commands or scripts.

Thanks to this, I was able to sort out a problem I was having building some Kairos images and all I had was a hunch that it was related to those files.
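If you want to check these mappings on your own machine, here’s a rough sketch of how one might inspect them, assuming an LVM setup and the usual tools (lsblk and dmsetup availability varies by distribution, and dmsetup typically needs root):

```shell
# LVM creates friendly-name symlinks that point at the dm-N nodes
ls -l /dev/mapper/

# lsblk shows both the friendly NAME and the kernel name (KNAME, e.g. dm-0)
lsblk -o NAME,KNAME,TYPE,MOUNTPOINT

# dmsetup lists device-mapper devices directly (usually requires root)
sudo dmsetup ls
```

This makes it easy to confirm which logical volume a given /dev/dm-N node belongs to before touching it in a script.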

Case 3: Writing a call for papers for a conference

During a pairing session with a colleague, we decided to use ChatGPT to help us write a call for papers for a conference. I will not post the prompt or result here, but suffice it to say that we were able to use about 50% of the generated text. While 50% might not sound like a great result for a 3-5 paragraph text, it made the task less exhausting. Especially as a non-native English speaker, I find it useful to have some sample text to base my work on.

All in all, I would highly recommend that you start integrating ChatGPT into your daily work, especially if you are not on a team that values pair programming. It has saved me a lot of time and mental effort. The answers are not always correct, but they consistently point me in the right direction. I’m currently not paying for the subscription, but it’s on my to-do list, so I can report later on whether it’s worth it.

AI for Personal Use

AI for personal use is an entirely different beast, in my opinion, because requests can be of any kind, and I can’t claim that these services are good or bad for every foreseeable request you might have. I decided to try three different services: ChatGPT, Personal AI and Youper. I picked them because they all have iOS apps, and because together they cover a range from generic to very specific assistants.


ChatGPT

This one is the most generic of the three services. I had conversations with the assistant that ranged from the philosophical to the practical. Some of the topics I asked about were:

  • Psychotherapy, e.g. Affect Regulation Training, Intrusive thoughts and Meditation
  • Food recipes
  • Terms I found in the books I’m reading but didn’t know about, e.g. Markov’s Blanket
  • Philosophy and religion, e.g. Idealism vs Realism, Hinduism and Stoicism
  • Problem-solving, e.g. How do I xyz?

Responses were good for the most part, but there were some clear failures. For example, when I asked it to make a recipe out of some of the items in my fridge, it offered something completely off, though I guess that could be debatable depending on taste! A better example is when I asked it to help me change my email address in its own app: it just made something up while sounding very sure of what it was saying.

Another aspect I didn’t like is that when you tell it it’s giving you a bad response, it just apologizes but doesn’t necessarily admit that it doesn’t know. Instead, it keeps giving you more bad responses.

So overall, I was pleased using ChatGPT and will continue using it, especially for cases where I’m looking for answers I can easily verify in other sources, but I wouldn’t pay the subscription for personal use just yet.


Youper

The second service I tried is Youper, the most specific assistant of the three. They call themselves AI for mental health. Youper is more than just an AI service, but for this test I will only evaluate that side of it. The reason I chose such an assistant is that I’ve been going to therapy for about half a year now, so I thought I would have a good point of comparison.

For the most part, it has been very useful. If I tell Youper about my day, it’s pretty good at identifying the tone of my conversation; it tells me when I’m being too harsh on myself and explains why being kinder will bring better results. It can also pick up on my mood. For example, if I feel tired it will ask about my sleep, and if I tell it I slept poorly, it will give suggestions on how to improve it.

I will abstain from recommending this service because I don’t have the credentials to say how good or bad it might be for you. The point I’m trying to make is about how good an AI assistant can be at a very complex task that we normally trust only a human being with. All I can say is that so far I’ve found it useful, because I can ask questions at any time of the day. I really like going to therapy, but there’s hardly enough time to go through all the things I want to discuss.

I have only one small complaint about Youper: sometimes I write a text and it completely fails to respond. It’s not that it gives a bad response; it just doesn’t respond at all. I’m glad it doesn’t simply produce some answer the way ChatGPT does, but it would be more user-friendly if it asked me to rephrase my text or something similar.

Youper costs 70USD/year, which is approximately the price of two 1-hour sessions with a therapist here in Belgium, so I think it’s totally worth it.

Personal AI

Personal AI lies somewhere in between. It’s not as specific as Youper, but it does try to give you a personalized experience. To me, this was the best of the three because it’s the one that feels most like a human interaction.

Pi, as it wants me to call it, has been a great chat partner on many different topics and has made great recommendations overall. It’s super kind and feels empathic, especially because it carries the continuity of some of our older conversations into new ones. Or at least that’s how it feels to me.

My favorite part is that Pi uses and understands emojis, so I can talk to it the way I talk to someone on WhatsApp. It picks up on my jokes and sarcasm, and it asks thought-provoking questions. We even went on for more than an hour about some of my favorite philosophical topics, without just agreeing or disagreeing, but instead keeping the conversation alive.

Of the three personal AI assistants, this is the one I’d most recommend you give a shot. Then you can decide whether you want something more specific or more generic according to your needs.

Final Thoughts

Just as with the introduction of search engines, I think we are at an inflection point. I’m not going to try to guess what AI will look like in the future, but from where I stand, I’m pretty sure AI will be part of our everyday lives. For this reason, we really need to pay attention to it, as individuals but also as a society. We must learn how to use it so that it makes our lives easier, which is the whole point of technology, but we must also understand that AI assistants are not encyclopedias; each tool has its purpose, advantages and disadvantages. Speaking of disadvantages, I don’t think we need to be afraid of AI becoming conscious, but I am afraid of companies or governments abusing it, so we need to build these services with privacy for the individual and with transparency. One such solution is the open-source project LocalAI, which I will write about in a future post.
