Pixel 6

My Pixel 4a (5G) lasted me around 11 months. One fall too many, and the screen turned green on one part, rainbow-ish on the other, useless as a whole. I sent the phone to Google for repair. Response: repair not possible, but a refurbished replacement was available for 329 EUR. Considering that the phone cost me 399 EUR, and that I get a company discount for new phones but none for repairs, I decided to purchase a new mobile.

In the meantime, I had to resuscitate my old Pixel 2. Battery issues notwithstanding, the phone is in really great condition. It has fallen many, many times, yet with no lasting damage. The Pixel 2 is smaller, with a Gorilla Glass 5 front, while the Pixel 4a is only Gorilla Glass 3. I found a good online offer, the Samsung S20 FE for 379 EUR, but alas, I am not very fond of my own experiences with Samsung phones, and it also has just a Gorilla Glass 3 front; given my history with the Pixel 4a, its fall survival rate looked discouraging, at best.

So, I added 200 EUR to my budget and got a Pixel 6. Glorious Gorilla Glass Victus, a lot of great reviews, but subject to the same underwhelming Google support service. I hope this phone will live up to expectations; I do not see myself getting into the alternative, iPhone territory...

However, this was somehow irrational: the Pixel 2 works fine, but it is just out of support, with really bad battery life. The Pixel 4a was destroyed after a few falls -and none really remarkable-. My brother bought a Pixel 4a on my recommendation, and he got a bad unit, requiring a replacement. Which would have been fine, except that Google support was really bad and he had to resort to consumer protection to get his money back. I guess I am just too invested in Android and I consider Google to be the only acceptable provider, which seems, anyway, a disputable call...

I have now installed the basics, the usual Google applications, and spent quite a long time installing the latest updates. I decided not to copy or transfer data from my previous installation, but to start anew. So I went to the Play Store website, My Apps, and started installing the chosen ones... Not an easy choice: My Apps in Google Play shows all the applications I have ever installed on any of my Android smartphones or tablets. It is possible to select a specific category, out of 33 categories, excluding games -which has, on its own, 17 sub-categories-. I would have liked to just choose the applications that existed on my previous Pixel 4a, as those were the applications I was using on a daily basis. In the end, I just installed:

  • Amazon
  • Authy
  • Brave
  • Feedly
  • Firefox
  • Google Authenticator
  • Google Find Device
  • Google Fit
  • Google Lens
  • Idealo shopping
  • Kindle
  • Microsoft Authenticator
  • Microsoft Teams
  • NetGuard
  • Opera
  • Signal
  • Skype
  • Slack
  • Solid explorer
  • Tik Tok
  • Uber
  • Vivaldi browser
  • Yahoo email
  • Zoho email
  • Zoom

Comparing AWS and GCP Certification

aws sysops icon After my previous blog post on the AWS certifications, I would like to compare them with GCP's, both on difficulty and on overall interest and approach.

In my view, AWS exams are very AWS-centric: you can mostly discard any exam answer that includes non-AWS technology or cumbersome processes, like writing custom scripts; allegedly, AWS has already developed some automation for you.

Google exams are much more open, more focused on software architecture, and choosing a non-Google technology is often the answer to an exam question. But Google exams are also less demanding, with more time to answer each question, and overall I find them easier, as long as you have an architecture background.

There are not as many services in GCP as in AWS, but there are also far fewer good-quality online courses. As an additional remark, I find GCP much more consistent in its services' implementation. In AWS you can understand the background of the processes and still wonder why some specific processes work the way they do, and the answer is probably that they haven't had the time yet to implement them in a different manner.

Perhaps more importantly: how interesting is it to get the AWS or GCP certification? Very, very much. CV-wise it is definitely a nice-to-have, and a good discriminator. But better than that, it is a good way to expose yourself to a lot of technologies that you probably don't touch on a daily basis.

That said, I find the Google certification globally more interesting than the AWS one, in that it focuses much more on plain architecture, site reliability, and technologies that you can use inside or outside GCP. The AWS professional architecture certification focuses on many aspects very specific to AWS, like AWS Organizations, hybrid scenarios, or migrations to AWS, which have no bearing on your architecture work outside AWS.

AWS Certification

aws sysops icon Yesterday I completed my AWS re-certification, the three associate and the two professional levels, so I wanted to write down some of my observations on the process.

  • In 2016 I got the associate Architect certification. I did it reading one book: Host Your Web Site In The Cloud, by Jeff Barr (978-0980576832). The book was already outdated -from 2010-, so I did quite a lot of hands-on work and reading of the AWS documentation. Plenty easier than now, of course.
  • Although my certification expired in 2018, I had one additional year to meet the recertification deadline, and I effectively passed it in 2019. Certifications now expire in three years instead of two, but somehow I had expected to still have that additional year for the recertification deadline, which seems not to be the case anymore. As a result, I have had a few stressful months to comply with the deadlines (bad planning on my side).
  • In 2019 I purchased three courses on Udemy, one for each associate exam, from Ryan Kroonenburg (A Cloud Guru). At around $10 each, they have had tremendous value, as they were mostly my only source to pass the certifications in 2019, and again in 2022. However, they are now a bit outdated: perhaps not by much, as a few videos were added, but considering that you need a 720 mark in the exam, any missing concept is a problem.
  • In 2019, I passed the professional exams by going through the AWS whitepapers, after passing the associate exams. Lots of reading, lots of learning, lots of time spent on the process, which I had at the time. In 2022 I found a new course on Udemy for the Architect exam, by Stephane Maarek, that is really good. I used another course from him for the DevOps exam, which is arguably better, having more hands-on content, but I definitely preferred the slides-only approach of the Architect course.
  • I needed a bit less than seven weeks to study for and pass the five certifications. Counting working and spare time, this amounts to around four to five days per certification, so it implies quite a lot of dedication. And I know that is quite a short time, but my professional activity involves both architecture and the management of AWS projects, so I was not starting from scratch.
  • The courses above focus too much on the exams, and on the kind of questions you can face. As a result, they can go into detail in some areas just for the benefit of some specific exam question. They lack a focus on explaining the background of each issue, which is what you need not only to master AWS, but also to pass the examination. If you understand why or how AWS implements CloudWatch subscription filters, you should be able to answer immediately that a subscription filter cannot export data to S3 directly, but needs something like Kinesis Data Firehose to do so. Why do some actions handle only SNS notifications while others allow Lambda integrations? If you approach AWS this way, you will understand it much better, and have many more options to pass the exam.
  • The associate certifications have a different scope than the professional ones; it is not just about difficulty. They require a more hands-on approach, a more detailed view of specific concepts, including performance, metrics, etc. However, passing a professional certification automatically renews the associate ones below it: not sure why, as, again, the scope is different.
  • Studying for the associate certifications facilitates studying afterwards for the professional ones. But studying for the professional ones, much broader in scope, would make it much easier to pass the associate ones. I would suggest trying the associate exams first and, whether you pass them or not, then going for the professional ones.
  • The DevOps professional exam was the most difficult one for me: from the very first question I was walking in quicksand. I was definitely far less prepared, but the questions were also much trickier. It was the only exam that I finished with just a few minutes left, and I was really careful managing my time.
  • The exams are not easy. The professional ones require a lot of knowledge. Plus, you have on average 2 minutes and 40 seconds to answer each question, and many questions are long enough to require that time just to understand the problem. Normally, for single-answer questions, only two of the four answers make sense, and thinking long enough gives you the answer -if you have the time, that is-. For the DevOps exam, the final two acceptable answers to each question still make plenty of sense, and you need to know the specific AWS implementation to get the answer right.

AWS SysOps certification

aws sysops icon I am currently doing the re-certification for my AWS certificates: I hold the three associate certifications (Solutions Architect, SysOps Administrator, Developer) and the two professional ones (Solutions Architect, DevOps Engineer).

This month I have renewed the three associate ones, and I am not sure if I will have time to renew the two professional ones before they expire, in 4 weeks.

The experience so far is that the exams are getting more complicated: definitely no more simple questions like which service would best fit a given functionality, and more ambiguous scenarios. In the case of the SysOps exam, there were now, in addition to the quiz questions, three exam labs, with the consequence that, unfortunately, the result was not available immediately after the exam anymore.

I had 50 questions in my test; I have seen other people mentioning 53 and 55, so I am not sure if there is some randomness in the number of questions, or just the normal evolution of the test. I do regular administration in AWS, but the three tasks requested during the exam were definitely outside my expertise. Moreover, the usual help support is missing, as documented, so you need to perform those tasks quite blindly. Still, the console design in AWS follows good practices and most tasks can be performed quite intuitively; I think I passed them without too many issues, and the final report was fine on my performance during the labs. As a note, copying and pasting during the labs is definitely a headache and something that should be improved.

After the exam there is just a note saying that the results will be received within 5 business days. The official documentation mentions that this CAN be the case: I imagine that a perfect score on the 50 quiz questions could already provide a 790 mark, enough to pass the test even with the worst possible results on the lab questions. I received the results, as for all my exams, on the day after the examination, at 21:01 in the evening.

Obsidian

obsidian icon I have been using a basic editor to keep all my notes, and the result is definitely sub-optimal.

Evernote had worked quite fine in the past, and I do not recall well why I stopped using it: probably the lack of a linux client, or the web interface letting me down too many times.

I have started now using Obsidian and it looks pretty nice: it stores the notes in markdown, so it is basically plain text. It is less visually appealing than Evernote, but the linking and navigation between notes is clearly an advantage.

This is my current obsidian setup, including my way to have multiple devices sharing files -using Dropbox- plus version control -git-.

CodeArtifact and profiles

codeartifact+maven icon I have just submitted a new version of the CodeArtifact+Maven plugin, with additional functionality to support AWS profiles. It now parses the AWS configuration files to find any defined profiles, which are then used to request the authorization token.

The plugin's window is getting bigger and bigger, with more fields to fill, although the latest added fields are handled as comboboxes, already pre-filled, so the complexity should not increase.
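The profile discovery itself is simple; a minimal sketch of how such parsing could look with Python's configparser, assuming the standard file locations (the plugin's actual implementation may differ):

```python
import configparser
import os

def find_profiles(config_path='~/.aws/config', credentials_path='~/.aws/credentials'):
    """Collect profile names from both standard AWS configuration files."""
    profiles = set()
    for path, prefix in ((config_path, 'profile '), (credentials_path, '')):
        parser = configparser.ConfigParser()
        parser.read(os.path.expanduser(path))  # silently ignores missing files
        for section in parser.sections():
            # in ~/.aws/config, named profiles appear as "[profile name]"
            name = section[len(prefix):] if section.startswith(prefix) else section
            profiles.add(name)
    return sorted(profiles)
```

Each name found this way can then be offered in the combobox and passed as --profile when requesting the token.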

OpenAPI for AWS - python

open4aws icon The openapi4aws utility had an initial implementation in Python; I developed the Java/Maven version to have a clean integration in our build process, which uses Maven.

After integrating it in our Maven poms, we still needed to update the resulting API gateway specifications on the fly, to reflect changes in the k8s endpoints in use. So I decided to modify the original Python implementation to reflect the changes introduced in the Java version.

The result is a single Python file with under 250 lines of code, and two thirds of the code is used just to handle the configuration parameters. I had fun migrating the data structures from Java -implemented as classes- to Python, where I used a mix of classes, dictionaries and data classes.

This version is available as a Python wheel.
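As a hedged illustration of that mix (the names here are made up, not the actual openapi4aws structures): data classes replace the small Java classes with final fields, while plain dictionaries replace the Java maps.

```python
from dataclasses import dataclass, field

@dataclass
class Authorizer:
    # a data class stands in for a Java class with final fields
    name: str
    identity_source: str = 'method.request.header.Authorization'

@dataclass
class Configuration:
    # plain dictionaries stand in for the Java maps, keyed by name / path
    authorizers: dict = field(default_factory=dict)
    endpoints: dict = field(default_factory=dict)

    def add_authorizer(self, authorizer):
        self.authorizers[authorizer.name] = authorizer
```

The default_factory fields avoid the classic mutable-default pitfall, so every Configuration instance gets its own dictionaries.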

Git origins and main/master

cleanup icon I keep source code repositories in Github, but not for all my projects. I also keep my own Git repository on a VPS, backed up regularly, which hosts my private projects, plus the public ones hosted in Github.

To keep a repository synchronized both to Github and to my private Git repository, I usually create first my private git repository, then create a repository in Github with the same name. Afterwards, I need to add github as a remote repository; for example, for my project openapi4aws-maven-plugin, I would do:

git remote add github git@github.com:coderazzi/openapi4aws-maven-plugin.git
git push -u github main

Now, changes can be pushed to my private repository doing:

git push origin

And to Github doing:

git push github

Github recently (October 2020) migrated the default branch from master to main, which means that, at least for any new repositories, I had better follow the same approach to ensure that I can push changes both to Github and to my own private Git server.

In my private repository, I can rename my default created branch from master to main doing:

git branch -m master main
git push -u origin main
git push origin --delete master

But this last command will likely fail, with a message: deletion of the current branch prohibited. To solve it, go to the git server, and access the folder where the git repository is stored, then do:

git symbolic-ref HEAD refs/heads/main

Now, on the local repository, the command git push origin --delete master will work as expected.

OpenAPI for AWS

open4aws icon I am working on a project where we define the interfaces of all microservices using Swagger, and then generate the API gateway specification dynamically. AWS allows importing an OpenAPI specification to define an API gateway, and it supports specific AWS directives to define endpoints and authorizers (security). As this information is dynamic (mainly the endpoints), we do not want to add it to the otherwise static microservices definitions, but to add it to the final specification at some later stage.

Here comes a new utility, openapi4aws, that does exactly that: it requires one or more files to update, plus a definition of the authorizers and all possible endpoints, and it overwrites those files to include the full AWS information. This way, it is possible to re-import the specification into a working API gateway without having to manually or procedurally define its integration with the backend implementation.
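The core of the idea can be sketched in a few lines of Python. This is a simplification: the real tool also handles authorizers and has a richer configuration, and the backend URL here is a placeholder.

```python
def add_integrations(spec, backend_url):
    """Attach an AWS HTTP proxy integration to every operation of an
    OpenAPI specification dict, so it can be imported as an API gateway."""
    for path, operations in spec.get('paths', {}).items():
        for method, operation in operations.items():
            # the x-amazon-apigateway-integration extension tells API Gateway
            # how to route each operation to the backend
            operation['x-amazon-apigateway-integration'] = {
                'type': 'http_proxy',
                'httpMethod': method.upper(),
                'uri': backend_url + path,
                'payloadFormatVersion': '1.0',
            }
    return spec
```

Loading each YAML/JSON file, passing it through a function like this, and writing it back is essentially the "overwrite those files" step described above.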

It is available in maven central, and the source is available in Github with an open source MIT license.

Windows: Git pull error on ssh-rsa

ssh icon Today, trying to update my AWS CodeCommit repository on my Windows virtual machine, git suddenly stopped working, with the following error:

Unable to negotiate with port 22: no matching host key type found.
Their offer: ssh-rsa
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The access is done via ssh, so I thought there was an error on my .ssh/config file, or perhaps I had updated my ssh key in AWS and forgotten to download it to this box. After many checks, everything was fine, yet I couldn't git pull.

Stack Overflow had the solution: the .ssh/config entry needed to be changed to look like:

Host git-codecommit.*.amazonaws.com
IdentityFile ~/.ssh/aws
User ....
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa

(Adding the last two lines)

sudo docker cp

docker icon Last time (I hope) that the command

sudo docker cp

bites me. To access docker I need sudo access, and the files are copied into the target folder owned by root. Not only can I likely not access them, but if any folder within does not have a+w permissions, I cannot remove them either, as my sudo access is limited.

Thanks to this stackoverflow link, I can use now instead the following python script to just copy from container source to host target:

from subprocess import Popen, PIPE, CalledProcessError
import sys
import tarfile

def main(source, target_folder):
    # 'docker cp CONTAINER:PATH -' streams the files as a tar archive to stdout
    export_args = ['sudo', 'docker', 'cp', source, '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    # extraction runs as the current user, so the files are not owned by root
    tar_file.extractall(target_folder, members=exclude_root(tar_file))
    exporter.stdout.close()
    # wait for the process to finish before checking its exit code
    if exporter.wait():
        raise CalledProcessError(exporter.returncode, export_args)

def exclude_root(tarinfos):
    # skip the root entry, and ensure the owner can read and write each file
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            tarinfo.mode |= 0o600
            yield tarinfo

main(sys.argv[1], sys.argv[2])

Virtualbox on Macbook Pro

virtualbox icon A client requires that I access my remote Windows machine through Citrix. At home I only have Linux or Mac computers, and whenever I log into Citrix and launch the remote desktop from there, the keyboard mapping is just broken. Entering simple characters like a semicolon is a matter of pressing the key several times: it will display a different character each time, until eventually the semicolon appears. You develop quite an art of using copy and paste...

However, if I use a Windows virtual machine in Linux, the mapping works. Mind you, it implies that I am using a virtual machine to access some cloud bridge that links me to my remote desktop. Beautiful.

But using a Windows virtual machine on the Macbook, using Virtualbox, shows abysmal performance. Plus, the screen appears very tiny, requiring upscaling (Virtualbox settings, Display, Scale factor). The best solution I have found to overcome this is executing Virtualbox in low resolution:

  1. Open Finder, go to Applications and select VirtualBox
  2. Right-click VirtualBox and select Show Package Contents
  3. Select VirtualBoxVM under Contents/Resources
  4. Right-click VirtualBoxVM and select Get Info
  5. Check the checkbox Open in Low Resolution

Unfortunately, this affects all the virtual machines, and the display looks definitely worse, but the performance becomes really acceptable. I get that using Parallels or VMware Fusion would be a more performant solution, at a price (plus I could not transfer my Windows license).

A detail that still killed me: I needed Windows just to launch the remote desktop in Citrix, but inside Citrix, ALT+TAB would just show me the processes in Windows, not those in Citrix. Likewise, all hotkeys would be captured by the Windows virtual machine, rendering them mostly useless. Citrix to the rescue: open regedit in the Windows virtual machine and set the value Remote in the following two keys:

  • HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Citrix\ICA Client\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard
  • HKEY_CURRENT_USER\SOFTWARE\Citrix\ICAClient\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard\

Cancelling tasks

codeartifact+maven icon When I developed my first Intellij Idea plugin here, I did the customary checks and, pressed to share it with my team, I uploaded it to the Idea marketplace.

The logic of the plugin is quite simple: read a file, obtain a token by invoking the aws binary with the specified parameters, and then update the initial file. These tasks normally execute very quickly, but the modal dialog that shows the progress included a Cancel button, which I failed to handle. My initial thought was that the Idea runtime would just cancel the progress and stop my background thread. Which is definitely wrong: Java initially had a Thread.stop method, which was quickly deprecated. Although it can be used, it should not be, and the Idea runtime would be quite poor if it did.

So Intellij Idea does the obvious: it sets a flag when Cancel is pressed, and it is the thread's task to check it regularly. Definitely a better solution, although the resulting code is uglier, invoking an isCancelled() method at almost every step.
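The pattern translates directly to any language; here is a minimal Python sketch of the same cooperative cancellation (the plugin itself is Java, and all names here are illustrative):

```python
import threading
import time

def long_task(cancelled):
    # the task checks the flag at every step, instead of being stopped externally
    results = []
    for step in range(100):
        if cancelled.is_set():
            return None  # abandon cooperatively, leaving a consistent state
        results.append(step)
        time.sleep(0.001)
    return results

cancelled = threading.Event()
worker = threading.Thread(target=long_task, args=(cancelled,))
worker.start()
cancelled.set()   # the Cancel button just sets the flag...
worker.join()     # ...and the task notices it at its next check
```

The downside is exactly the one mentioned above: the cancellation check litters the task's body, but the thread always stops at a point of its own choosing.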

And since I had to invest some time to get this working, I decided to decorate the plugin: in my rush, the initial version was just a bunch of labels and input fields. The presence of an image makes it somehow better (or so I believe):

Backing up Thunderbird

thunderbird icon Each time I set up a new computer, I manage to have it up and running with my preferred configuration by following very well defined steps, taking advantage of the fact that most of my development is based on git, so I just need to clone my personal repositories.

I have added now as well the instructions to replicate a Thunderbird profile, so the same accounts exist directly on the new setup.

Corsair 280X Build

corsair 280x case Last time I built my own computer was in 2012. That was a quad-core i7-3770 with 32 GB that has served me well since then. I do very little gaming (just Civ V) and most processing is handled on some remote machine, so I had not seen the need to upgrade to a newer machine. But some operations were indeed starting to seem quite slow, and the PC can be noisy enough to be noticed as soon as something requires some CPU power. Adding to this some problems with the video output, I decided to go for a new build.

Nothing fancy: a Ryzen 5600X and, favouring small cases, a mATX mobo, the Asrock B550M Pro4, in a Corsair 280X box. Back in 2012, LianLi was the case to have: all brushed aluminium, high quality. This time, I was rooting for the LianLi PC-011D mini, but I had already purchased the Corsair case for a friend's build that finally hadn't happened, so I decided to use it for my own build.

My previous build used a LianLi PC-V351 case, a small evolution of the PC-V350 that I had used before. These are nice cases, but not nice cases to tinker with. Opening one definitely requires a screwdriver -6 screws to open any of the side panels-. Reaching the hard drive cage could be done without a screwdriver, but meant fighting the connection of a small fan sitting behind it. Any PCI card modification required opening the case completely, taking the motherboard out -all wires out- and rebuilding it. Nice case, but nightmarish.

The Corsair 280X is 40% bigger: just a bit wider, less deep, and a full 10 cm higher. It looks bigger, but just slightly, until you start building the PC and realize how much space you have. And how well everything is organized, and how well built the case is. It includes two fans that are totally silent.

I had purchased a Noctua cooler to replace the Wraith Stealth cooler that comes with the Ryzen 5600X, and initially thought of returning it: the default cooler has a distinct noise, but I assumed that once the case was closed, with its fans, you would not hear it. Then I mounted the Noctua NH-L12S, and I could not really tell whether the system was on or off, even with the case still open! Kudos as well to the power supply, the equally silent be quiet! Pure Power 11.

The only thing that bums me about the build is a detail on the motherboard: the second M.2 slot does not have all the lanes it should, so any PCIe 3 SSD you place there will run at lower speeds. I bought a cheap NVMe-to-PCIe adapter for 14 euros, and my measurements are:

                  Average read   Average write   Access time
PCIe4 M2.1 slot   3.5 GB/s       615.3 MB/s      0.02 ms
PCIe3 M2.2 slot   1.6 GB/s       615.3 MB/s      0.02 ms
PCIe adapter      2.9 GB/s       605.8 MB/s      0.02 ms

So, same access time and same average write, but definitely better to use the additional adapter than the M2.2 slot. Which is therefore useless.

The only doubt I have now is that the case is beautiful, easy to service, but mostly empty. How much better a mini-ITX build could have been...

CodeArtifact + Maven Idea plugin

codeartifact+maven icon In a new project, we needed to publish artifacts to a repository in AWS, using CodeArtifact. The instructions to do so are very clear, but they are a poor fit when using an integrated environment such as Intellij Idea.

Basically, you would need to update an environment variable with an authorization token obtained by invoking an aws cli command. After setting the environment, you would need to launch the IDE from inside that environment. And as the token needs to be refreshed every 12 hours, you would need to quit the IDE and repeat the process that often.

The solution was easy: instead of referring in the Maven settings file to an environment variable, include there directly the received AWS token. And, to automate the process, better than using an independent script, why not have an Intellij Idea plugin that implements exactly this?
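A sketch of that logic in Python (the aws cli call is the documented way to obtain the token, but the domain, owner and settings layout are placeholders, and the plugin's actual code differs):

```python
import re
import subprocess

def get_codeartifact_token(domain, owner, profile=None):
    # ask the aws cli for a fresh authorization token (valid for up to 12 hours)
    cmd = ['aws', 'codeartifact', 'get-authorization-token',
           '--domain', domain, '--domain-owner', owner,
           '--query', 'authorizationToken', '--output', 'text']
    if profile:
        cmd += ['--profile', profile]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

def update_settings(settings_xml, token):
    # store the token directly in the maven settings, replacing the old one
    return re.sub(r'<password>.*?</password>',
                  '<password>%s</password>' % token, settings_xml, count=1)
```

The plugin essentially wires these two steps to a button, so refreshing the token no longer requires restarting the IDE.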

This plugin is defined here, already published in the public Idea plugins repository, and available in Github.

TableFilter v5.5.3

table filter icon New release for this Java library, with a small patch covering a rare null pointer exception.

The source code and the issues management is now hosted in Github, and of course, the code has moved from Mercurial to Git. It is a funny coincidence: I started using SVN for the version control, and moved to Mercurial the 5th May 2010, using Google Code to host the repository. Exactly five years later, the 5th May 2015, I had to move from Google Code to Bitbucket, and almost exactly other 5 years later, the 7th May 2020, I have completed the migration from Bitbucket to Github. Which means ten years and two days of good Mercurial support. Shame that Bitbucket is kicking it out...

Recovering Touch Id on Macbook pro

macbook pro icon In Summer 2019, my daughter gave me a badly closed bottle of Fanta, which I placed in my laptop bag, together with my Macbook Pro. A short while later, I learnt two things: that my bag was quite watertight, and that the Macbook was a well built machine that had survived the Fanta puddle experience unscathed.

This happened during a flight to some holidays, where I barely used the laptop, but eventually I realized that my fingerprint was not recognized anymore. Somehow I linked both experiences, assuming that the liquid had affected or broken the T1 chip that handles Touch Id. However, this seems a faulty theory, as the T1 is used for other things -like driving the Touch Bar's screen- which still worked fine.

I tried all the available options to get it working again. I could not remove the existing fingerprints, resetting the NVRAM did not help, and a problem reported by other users -removing Symantec Endpoint Protection- was definitely not my problem.

The only unproven solution was reinstalling MacOS. I had bought my laptop with Sierra installed, had skipped High Sierra and installed Mojave at some point, but didn't see any benefit in installing Catalina. Now, I was 30 months behind, and Big Sur was calling, so I decided to go the hard way and install it from scratch, as a last attempt to get Touch Id working again.

And it worked. I am happy to have Touch Id working again, but dismayed to know that it can fail again -Fanta likely notwithstanding- and that there is no obvious way to get it back, except for a full re-installation.

Setting up Ubuntu, 20/04 edition

Ubuntu configuration For the LTS editions of Ubuntu, I prefer to start with a clean slate: copying my SSD disk to some external backup, and performing a destructive installation, erasing the whole SSD. And as this means having to reconfigure Ubuntu completely, I keep a record of all the steps I take.

I have updated my Ubuntu setup to list these steps: how to configure workspaces, how to define my shortcuts, how to make Guake start at login, etc.

New optmatch version, first non beta

python Version 1.0.0 of optmatch is now available on pypi.

After two years without issues, I took advantage of the need to update the documentation -with the new references to the Github hosting- to move the version out of beta.

Moving from Bitbucket

table filter icon Exactly five years ago, almost to the day, in May 2015, I had to move my source repositories from Google Code to Bitbucket.

At that moment, Google Code recommended Github as a hosting replacement, but I preferred to keep using Mercurial. Five years later, all my professional experience relates to Git (with some usage of SVN and even, still, CVS), and I can definitely understand Bitbucket's shift of focus, deprecating any Mercurial usage.

However, Bitbucket's total lack of support on how to migrate existing projects to Git is a regrettable attitude after these years of great support. There are good tools to support the migration from Mercurial to Git -I used fast-export myself-, but Bitbucket does not support the conversion of a repository from Mercurial to Git, even if the upload of the converted files were a manual process. And deleting the repository and creating a new one would mean the loss of the project's issues.

I decided to move to Github, and the move was helped by a tool to migrate the issues directly from Bitbucket to Github. I tried using this other tool at first, but it gave me too many problems. In any case, both are proof of the issues that many people are having with the demise of Mercurial support at Bitbucket...

Farewell, BitBucket.

TableFilter v5.5.2

table filter icon New release for this Java library, with no functionality changes at all, just the updates to documentation and links required after the move from Bitbucket.

The source code and the issues management is now hosted in Github, and of course, the code has moved from Mercurial to Git. It is a funny coincidence: I started using SVN for the version control, and moved to Mercurial the 5th May 2010, using Google Code to host the repository. Exactly five years later, the 5th May 2015, I had to move from Google Code to Bitbucket, and almost exactly other 5 years later, the 7th May 2020, I have completed the migration from Bitbucket to Github. Which means ten years and two days of good Mercurial support. Shame that Bitbucket is kicking it out...

Internal GPU

Cube computer upgrade The Intel HD Graphics 4000 included in my aging i7-3770 CPU cannot handle the 3840x1600 resolution of my new monitor, so I had to shop for a new GPU card. I believe I should have stayed with my initial choice, a passive GT 1030, but in the end I chose the most powerful card my 400 watt PSU could feed: a GTX 1650 Super.

I had expected an easy setup -disabling the internal GPU and initializing only the PCI Express GPU-, but my motherboard kept crashing. In the end, the only BIOS settings that allowed me to boot were keeping the internal GPU enabled and setting the GPU initialization to Auto or IGFX (internal).

Using Ubuntu 20.04, no configuration is needed to use the internal GPU and the Nvidia card simultaneously; everything worked flawlessly, immediately, when attaching a monitor to each GPU.

Spoiled geek

monitor size comparisonSpoiled geek: a geek who orders a 38" monitor and, on first impression, thinks: it is not THAT big.
Funny enough, I wrote in 2011 about the same impression I had with a 30" monitor, the Dell U3011.

But, in fact, this time I think it is the right impression. It is very wide, about 20 cm wider than the 30" monitor, but it is also shorter: 5 cm less, though with small bezels, so it just translates to 3 cm less display height. Which I don't miss: the horizontal neck movement is easier (at least for me) than the vertical one.

My original intention was to get a 34" curved wide monitor. Glad I decided on this size, and in fact I am left wondering about the much bigger, but equally priced, Dell U4919DW. That one is much more curved, but it is also (3 cm) shorter, and 30 cm wider than my 38" selection. This made me think that its ergonomics were quite a bit skewed. Plus, I was considering a move from 30" to 34" and then to 38", so a 49" monitor was really one step too far. But perhaps the extra curvature helps with the ergonomics; perhaps up-sizing to 49" would have been the good, bold move...

DD-WRT issues

dd-wrtIn the last 24 months, I have added three entries to this blog. As all procrastination woes go, there is an excuse: I intended to move my static site generation from my own custom solution to Hugo, but I never dedicated enough time to it and the migration never happened; in the meantime, I would just not write more entries...

My custom blog solution is a C# program I wrote almost 15 years ago, and I still fully intend to move to Hugo or something similar at some future time. It works fine, and pretty fast (a few seconds for this site), but any single modification would take me many hours. I even had a Python solution to synchronize my local folders to my server's location, which at some point in the last 15 years was replaced by a simple rsync script, which, of course, expects a direct connection to port 22 on my dynamic public IP address. For security reasons, that port is closed on my home router, and I only open it when required.
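The rsync script itself is trivial; here is a Python sketch of an equivalent wrapper, where the local folder, the host name and the remote path are all hypothetical placeholders:

```python
import subprocess

# Hypothetical local output folder and dynamic-DNS host; adjust to the real site.
LOCAL_DIR = "public/"
REMOTE = "user@myhome.example.org:www/blog/"

def build_rsync_command(local=LOCAL_DIR, remote=REMOTE, port=22):
    # -a: archive mode, -z: compress in transit,
    # --delete: remove server files that are gone locally,
    # -e: force ssh on the given port (the one opened on the router on demand)
    return ["rsync", "-az", "--delete", "-e", "ssh -p %d" % port, local, remote]

# subprocess.run(build_rsync_command(), check=True)  # run once the port is open
```

The run call stays commented out because it only makes sense once the router port is actually open.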

As complexity increases with time, my previous port opening became... multiple port openings, as I had decided to secure a part of my home network. But when trying to access the final router, where I had flashed DD-WRT, I couldn't remember the password, and it was not in my KeePass files, so after many, many logon attempts, I had to reset the firmware. And, for the sake of it, upgrade the router.

But upgrading, or installing, the DD-WRT firmware is not simple. If you visit its wiki, you get a big warning not to use the router database. It offers two alternatives: Kong builds, with additional testing, which have not been available since July 2019, and beta builds, which, by definition, seem a coin toss.

After upgrading, I still had network access, but the Wifi access was gone. SSIDs were broadcast, radios were on, but my phone couldn't see them. Funny point: the DD-WRT Status page offers a gadget to scan the network, but it was asking me to enable the wireless network -of course, giving no hints on how to do so...

The solution was the usual one: reboot the router, and the 2.4 GHz SSID appeared on my phone. To also get the 5 GHz SSID, I had to move the wireless channel from Auto to a specific channel. My router is a Netgear WNDR3700; its last official Netgear firmware is dated 2011. I am definitely glad the only issues I had with the DD-WRT firmware were so easy to solve.

TableFilter v5.5.1

table filter iconNew release for this Java library, implementing some requested functionality regarding the automatic hiding of filter popups during table model updates.

It was on the pipeline for two months, but other things kept popping up and delaying this release.

New optmatch version

pythonVersion 0.9.2 of optmatch is now available on PyPI.

It solves the issues raised so far, and cleans up some implementation details.

Argument parsing in python

pythonAlmost 10 years ago, I thought about implementing a new solution for argument parsing in Python.

Basically, existing solutions (both now and then) instruct a parser about the expected options and flags, and the parser handles the arguments, catching any errors and automatically producing a beautiful help summary. The major issue is the parser's flexibility to define the correct argument syntax: for example, whether it is possible to specify incompatible options, flags, etc.: two options --verbose and --quiet can be incompatible, or an action scan could require a mandatory --source option. Additionally, if the parser is flexible enough to provide this functionality, it can be quite difficult to program, or to change at a later stage.
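For reference, the two constraints above can be expressed with plain argparse like this -a sketch with illustrative names, not taken from any real project:

```python
import argparse

# --verbose and --quiet are declared incompatible, and the 'scan' action
# gets its own sub-parser with a mandatory --source option.
parser = argparse.ArgumentParser(prog="example")
group = parser.add_mutually_exclusive_group()
group.add_argument("--verbose", action="store_true")
group.add_argument("--quiet", action="store_true")

subparsers = parser.add_subparsers(dest="action")
scan = subparsers.add_parser("scan")
scan.add_argument("--source", required=True)  # mandatory only for 'scan'

args = parser.parse_args(["--verbose", "scan", "--source", "/data"])
```

This covers the simple cases; the pain starts when the incompatibilities cut across actions and options, which is exactly where extra post-parsing code creeps in.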

My idea was to define class methods that could handle each possible combination of arguments, and the parser would extract that information without further effort from the programmer. That is: express what each operation requires, and expect the parser to handle all the required logic:

class Example(OptionMatcher):

    def handle_common_flag(self, mail_option):
        ...

    def handle_compression(self, file, compress_flag=False):
        ...

    @optmatcher(flags='verbose', options='mode')
    def handle(self, file, verbose=False, mode='simple', where=None):
        ...
It not only simplifies argument handling and expresses the purpose clearly, but adding or removing options, flags, or operations is in fact very easy.

I implemented this solution shortly after, and then started adding functionality as required. Looking at the history of the project, I invested around 10 months in it, although I no longer remember the associated effort -the optmatch.py file has just 800 lines of code, plus comments, plus tests and documentation-.

And then I used it in some of my projects, a few people contacted me about it, and it was included in some other projects, but it was definitely not a success. In fact, when I needed to do argument parsing in scripts at work, I would normally default to a standard solution: argparse.

A few weeks ago I was once again facing the dilemma of creating a minor personal application, and I started using argparse. Soon I got into the familiar territory of having incompatible options, flags for only specific actions, and different actions under the same script. This meant adding code after the parsing to handle all these issues. And, after a while, adding a new option required quite a lot of effort just to handle everything properly, so I remembered my own unloved library, and decided to give it a try, again.

And that was it, like falling for some old love :-)

So I have spent a few hours updating the library -it only supported camel-case notation on parameters, and now it supports the more standard underscores- and, more importantly, uploading it to the standard Python Package Index (PyPI), so it can now be easily installed as:

pip install optmatch

Even better, support for Python 2 and 3 is now included in a single file.

Bollocks UI

bollocks interfaceI read this interesting article on the (wrong) effect of using flat UI, where users were found to require longer times to grasp the meaning of UI elements in web pages, therefore delaying their actions.

Bollocks, but not so much as trying the latest Nautilus interface, the default file navigator in Ubuntu 17.04. Creating a new folder requires right clicking on the folder contents and selecting the first option (New Folder). But note that selecting the parent folder and right clicking shows a menu where the option to create this new folder does not appear. Furthermore, Nautilus has an application menu, but the option to create a folder is all but missing.

When you right click on the folder contents, the first option is indeed 'New Folder', and it very handily displays its shortcut: Shift+Ctrl+N. BUT: what happens if the folder contains too many files/sub-folders? There is no empty area to right click on, and there is no way to create a new folder except by knowing the shortcut.

I guess that one or more developers just went too far in their quest to simplify the interface.

OS Agnostic

macos againYep, after 6 months in Ubuntu land, the Wayland switch -or the associated bugs- dropped me back into the arms of Hackintoshing.

Not happy about it, but Wayland is at the moment a no-go in my configuration; and the MacOS installation was really simple, with just a couple of hiccups. The whole installation process is described here.

Switching operating systems is now an almost painless process. I do not rely on the cloud to store my files -just a few, which in fact host most of my configuration information. As a result, installing a new operating system normally implies:

  • The OS installation itself -30 minutes for Ubuntu, a few hours for Hackintosh-
  • OS configuration, a 15-minute process once I have it correctly documented
  • Program installation, a 60-minute process, using software centers and the command line
  • Copying my common folders, either from the old drive or from a backup, which takes up to 3 hours

Ubuntu 17.10

wayland suicideSix months ago, after repeated issues with Hackintosh, I moved to Ubuntu 17.04, and all was well.

Then came Ubuntu 17.10, with the move from Unity and the introduction of Wayland, and man, was that a big change. It was supposed to be, like every recent Ubuntu release, a minor upgrade, almost more of an update than a real move.


But. That workspaces no longer work as they did with Unity, that shortcuts seem to work only randomly, that Steam seems much slower and exhibits a gap between the cursor pointer and the exact pointed location -all these are, for me, just inconveniences. But that whenever the monitor goes to sleep, Wayland performs a swift harakiri, with all X programs getting killed, is a major showstopper. I cannot justify all the lost time.

What is special about my setup to produce such an outlier crash? No idea; I use an embedded Intel HD 4000 solution with a DisplayPort connection. I have tried disabling DDC/CI on the monitor, without success. For me, Ubuntu 17.10 is, on this machine, a no-go.

I have filed a bug for this problem, but no solution or activity so far...

I could stay with 17.04, but that means a very short period of security upgrades. I could try installing 16.04, an LTS release, or, as I have done, leave the Ubuntu 17.10 upgrade in place, hoping for updates to solve my issue, and have a new Hackintosh try on a separate drive. Let's see how the fight Ubuntu Wayland vs Hackintosh works out for me in the near future...

Note: on 28th November, I switched back to Ubuntu, upgraded, dist-upgraded and rebooted. But Wayland persists in its suicidal ways...

DisplayPort on Ubuntu

monitorThere is an ongoing issue with monitors connected via DisplayPort in Ubuntu: the monitor shows a black screen and "no input signal" after turning it off and on manually.

Turning off a DisplayPort-connected monitor is treated the same as disconnecting it, and somehow X11 does not recover from this. I have seen this error related to Nvidia and Radeon cards, but in my case I have an Intel HD 4000, and the error is exactly the same.

And it happens with just about all kinds of monitors, including my Dell U3011. A proposed solution is to disable DDC/CI on the monitor itself, but this didn't solve anything for me.

A solution I have found is to press Ctrl+Alt+F6 (or +F5, etc.) to open a TTY console, and then press Ctrl+Alt+F7 to get back to X11. But it works only sporadically, sometimes requiring pressing these keys several times, or creating new TTY consoles, like pressing Ctrl+Alt+F2 to create a TTY2 if TTY6 had already been created before.

Another proposed solution is to ssh from another machine and run:

env DISPLAY=:0 xset dpms force off
env DISPLAY=:0 xset dpms force on
In this case, the best option is to create a shell script /usr/local/bin/display-port-wake-up.sh with this content, open Settings, Keyboard, Shortcuts, and create a custom shortcut (in my case, Ctrl+Alt+W). This solution always works, but there is a catch: the shortcut is only available when the user is logged in, so if the system is asking for the user password to unlock the screen, the shortcut will not work. In that case, you need to enter the password blindly, press Enter, and then press the shortcut.
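The script boils down to the two xset invocations above; for completeness, a Python sketch of an equivalent wrapper (the display name :0 is the usual default, as in the commands above):

```python
import os
import subprocess

def dpms_commands():
    # The two xset invocations, as argument lists for subprocess:
    # force DPMS off and then on, which re-initializes the DisplayPort link.
    return [["xset", "dpms", "force", state] for state in ("off", "on")]

def wake_display(display=":0"):
    # X is reached through DISPLAY, as when running from an ssh session
    env = dict(os.environ, DISPLAY=display)
    for cmd in dpms_commands():
        subprocess.run(cmd, env=env, check=True)
```

In practice the plain two-line shell script is simpler; this is just the same idea in script form.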

Moving Ubuntu to separate disk

driveAfter a few weeks with my new Ubuntu installation, I was able to do all my usual tasks without missing OSX. So far, I have found only two issues: the DisplayPort monitor sometimes not waking up, and some crashes in Steam.

But the main issue appeared when trying to set it up for Android development... and running out of space. So the original hackintosh disk, which was on standby, had to go, and the idea was to clone my existing installation to that disk.

So I reformatted the hackintosh drive, just to find that I had removed the EFI partition that booted the Ubuntu system... My solution was to launch the Ubuntu installer and install Ubuntu again on the new drive (the previous Hackintosh drive), taking care to have one EFI partition plus a big ext4 one. Once I had Ubuntu installed, I launched the installation system again, mounted the old partition under /media/old and the new one under /media/new, and then copied all the important files:

cd /media/old
sudo cp -R --preserve=all bin/ etc/ home/ lib/ lib64/ \
            opt/ root/ run/ sbin/ usr/ var/ /media/new/

Then edit the file /etc/fstab to change the UUIDs of the disks, and presto!
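The fstab edit is a simple substitution; a Python sketch with made-up UUIDs (sudo blkid reports the real ones):

```python
# Point the fstab entry at the new disk's UUID.
# Both UUIDs here are hypothetical placeholders.
OLD_UUID = "11111111-2222-3333-4444-555555555555"
NEW_UUID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

def fix_fstab(text, old=OLD_UUID, new=NEW_UUID):
    # Replace only the UUID= reference, leaving mount options untouched
    return text.replace("UUID=" + old, "UUID=" + new)

line = "UUID=%s / ext4 errors=remount-ro 0 1" % OLD_UUID
fixed = fix_fstab(line)
```

In practice, editing the file by hand with the output of blkid is just as quick.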

Setting up Ubuntu

Ubuntu configurationSetting up an Ubuntu computer seems to be my fate of late. And each time I do it, I need to do the same Google searches: how to configure workspaces, how to define my shortcuts, how to make Guake start at login, etc.

So I have collected all the steps I take to configure an Ubuntu machine, from a generic point of view: not describing all the applications I finally use, but definitely including all the details to configure Ubuntu as I like it. It seems weird to publish these steps for an Ubuntu 16.10 installation on the same day that Ubuntu 17.04 is released, but I will definitely comment on any changes.

Ciao, Hackintosh

ciao hackintoshI have been setting up my computers as Hackintoshes since 2009; currently I have a Mac laptop with the latest Sierra installation, and a Dell laptop running Ubuntu 16.10, plus two desktop computers: one running Windows Vista and Snow Leopard (yep, both still run perfectly fine), and the other with MacOS Sierra. At work I use Linux exclusively, and I had been wondering for a long time about my reasons to keep my Hackintoshes at home.

As of last weekend, this question has been answered: farewell, Hackintosh. I will keep the old desktop running Snow Leopard and my Apple laptop running MacOS, but I definitely see no point in not using Ubuntu as my first OS choice. The choice is rather simple to make, as none of the programs I use lacks a Linux version, with the exception of Evernote, which I can still use via its web interface.

Batch editing Google contacts

google contactsThe new Google Contacts application is very nice. It looks great and offers good functionality, like merging duplicates. Editing contacts works perfectly, but it can only be done one contact at a time. I was migrating contacts from a non-smartphone, and the migration had converted them into an ugly 'Family name; surname' format, such as 'Trumpy; Donald'. And as slick as the user interface is, editing over 300 contacts by hand was a boring prospect.

Automation to the rescue: export the contacts, process them with a Python script, import them again. In Google Contacts, press 'More', then Export, then read the popup warning and head to the old Google Contacts application, as the new one seems unable to export the contacts. Press the More button again, then Export, and choose the Google CSV format.

The script to convert the contacts is as simple as:

 import csv

 with open('google.csv', newline='', encoding='utf16') as f:
     contacts = list(csv.reader(f))

 for contact in contacts:
     colon = contact[1].find(';')
     if colon > 0:
         name = "%s %s" % (contact[1][colon+1:], contact[1][0:colon])
         contact[0] = contact[1] = name

 with open('google-out.csv', 'w', newline='') as f:
     csv.writer(f).writerows(contacts)
This reads the exported google.csv file and creates a new google-out.csv file; in the old Google Contacts application, now remove all contacts, and then initiate the Import process, passing the created file. Easy as pie.

The previous script shows a very basic transformation; the important aspects are: (1) Python 3 is needed to run this script, as its CSV reader properly handles the Unicode format. (2) The input format seems to be UTF-16 (it definitely was on my OSX machine). (3) However, it was reimported as UTF-8 without issues.

Finally! oh, no!

dell xps 13 I have a Macbook Pro 13" early 2011. I upgraded it manually: the memory to 8 Gb, and the hard disk to a 128 Gb SSD (and later to a 256 Gb one). The battery is down to about 3 hours, and I was planning to buy the revamped Macbook Pro as soon as it was released. So I was one more of those Macbook Pro fans completely astonished to see what Apple was releasing.

Personally, the touch bar seems a total compromise: missing the touch screen (which I am not fond of either), lacking real useful buttons (wrong: the useful buttons seem to be always on the touch bar), and looking like the Apple way to say 'Look, we are innovating!'

My macbook is RELIABLE; it has come with me to really remote areas in Bangladesh, the Philippines, Benin, Eritrea or Mozambique, and it has performed like a pro. The available ethernet connection has saved me more times than I can count. The magsafe connector has very probably avoided a few accidents, and when/if it ever gets retired, my macbook pro should get its own urn.

What is Apple offering now? Only USB-C (oh, well), no magsafe (sigh!), no ethernet connector (augh!), a touch bar (oh, my, my), soldered memory and storage ($*&#^$!)? Together with an old CPU (okayish, but not very okayish) and a 16 Gb limit (yeah, I know, more memory would impact battery time). And then the butterfly keyboard, which is unpleasant, and seems not that reliable. Note that I do not even mention the price; I would not mind paying the Apple tax for a worthy product!

My company provided me with a Dell XPS 13, Developer Edition, and I was thinking about hackintoshing it. I have 3 hackintoshes -all desktops- and this was looking like a good way to get a proper Apple laptop without the limitations of the latest Apple models.

But is this the case? The XPS 13 has more connectors, but still lacks the one I consider important (Ethernet), which is just not possible in laptops this thin (but I do not care about their thickness!!). As in the Macbook, the memory is soldered, so you get what you buy, and it is also limited to 16 Gb. In Luxembourg, the only XPS 13 Linux model with 16 Gb of memory comes with a 512 Gb SSD (256 would have been enough for me, especially when it can be upgraded) and a high-resolution touchscreen (the basic 1920x resolution is enough for me, especially when it improves battery life, and the touchscreen is something I have yet to find useful). It costs, with a core i7-7500U processor, 1800 euros, with an ongoing promotion down to 1600 euros. It has approximately the same size as the Macbook Pro and, for me, the same limitations.

Screen-wise, the Dell has higher resolution and the Apple better brightness -which at this size means a bonus point for the Apple. The Dell's keyboard is better, and the trackpad works well, but it cannot be compared with the Macbook Pro's. I like / love Linux, but the desktop experience is subpar compared to MacOS, and the full integration with the hardware means that even though the Macbook battery is smaller, its battery life would normally be better than on the Dell.

The Macbook Pro 13" without the silly touch bar costs 1875 euros. This provides a 256 Gb SSD, enough for my taste, but goes up to 2100 euros for the 512 Gb SSD. The core i5 processor (i5-6360U) can be upgraded to an i7 (i7-6660U) for 'just' 350 euros, raising the total price to 2445 euros. So the Dell XPS 13 with a newer i7 processor costs 645 euros less, and with the current promotion, 800 euros less: a cool 33% cheaper. In fact, I find it a better approach to opt for the touch bar version and stay with the core i5 -the i7 is only dual core anyway. For 'just' 2165 euros you get 16 Gb of memory, a 256 Gb SSD and a core i5 at 2.9 GHz, plenty speedier than the non-touch-bar one (hint: Apple wants you to get the touch bar).

Personally again, I think the Apple is anything but Pro. I do not think the Dell is more Pro at all, just cheaper for the same ambitions. If I were to go the Apple way now, I think I would try a real 4-core machine, in its 15" envelope. The basic 15 inch gives a 4-core processor, 16 Gb of memory and a 256 Gb SSD for 2600 euros, including a discrete graphics card in the mix.

Or, well, I will just give up on these manufacturers and enjoy my Dell for the time being; perhaps I will hackintosh it for the pleasure of it, and I guess I will still invest some money in Apple stock... and in a new battery for my reliable Macbook 2011 (real) Pro.

Ubuntu 16.10 on Dell XPS 13 (9350)

dell xps 13I recently got a Dell XPS 13, the Developer Edition that comes with Ubuntu. It is not the latest model (9360), with the Kaby Lake processor, but the previous one (9350), with Skylake. These two versions differ not only in the processor, but also in the Wifi card and the Ubuntu version; while the 9360 model comes with Ubuntu 16.04 installed, Dell only provides Ubuntu 14.04 for the 9350 model.

Ubuntu 16.04 includes Linux kernel 4.4, with incomplete support for Skylake processors; kernel 4.6 included specific support for Dell XPS 13 systems, and Ubuntu 16.10 comes with kernel 4.8. Installing Ubuntu 16.10 on this laptop is a no-brainer; everything works fine directly.

My laptop is the model with touchscreen and high resolution (3200x1800); the fonts appear too small, so it is better to go to Settings / Displays and choose a 2x scale (or any scale at will).

The other problem that appears with the touchscreen is that Ubuntu installs by default a library implementing braille support. Entering

sudo tcpdump -i lo
shows a lot of traffic on the loopback interface from port 4101. To disable it:
sudo apt-get purge brltty xbrlapi
(which requires an additional reboot)

Although everything works fine, there are some quirks. For example, the touchscreen works until the machine goes to sleep with the lid closed. To have it working again, you need to quickly close and reopen the lid.

TableFilter v5.3.0

table filter iconNew release for this Java library, implementing some requested functionality: the possibility to filter entries that contain some text -initially, the default operator would only display entries starting with the given expression-.

VirtualBox guest additions on headless

virtualbox Normally, I run VirtualBox machines in headless mode, so installing an OS with a full GUI is useless. In these cases I favor a Debian installation (minimal, using the netinst CD), ensuring that PAE/NX is enabled in System/Processor.

It is still helpful to install the VirtualBox guest additions to improve performance, but the usual way -Devices / Install Guest Additions CD Image...- doesn't work. In this case, the best procedure is to download them and perform a manual install. For the current version, 5.0.16:

cd /tmp
wget http://download.virtualbox.org/virtualbox/5.0.16/VBoxGuestAdditions_5.0.16.iso
mkdir iso
sudo -s
apt-get install -y dkms
mount -o loop VBoxGuestAdditions_5.0.16.iso iso
sh iso/VBoxLinuxAdditions.run --nox11
umount iso
rm -Rf iso VBoxGuestAdditions_5.0.16.iso

This will install the additions and produce a final warning:

Could not find the X.Org or XFree86 Window System, skipping.

This warning is okay. It is possible to check whether the additions are installed by invoking:

lsmod | grep vboxguest


rclone A few years ago, I set up a poor man's backup system for a site using rsync and Dropbox. Eventually, the database and associated files required more than 2 Gb, and, still following poor man's habits, I had a look at Google Drive; unfortunately, Google does not have an official headless Linux client.

But there are several unofficial clients. Like gsync, which tries to provide rsync functionality but is still severely limited. I opted instead for another client, rclone, which supports not only GDrive, but also Dropbox, Amazon S3, Backblaze, etc. The documentation is very complete, the setup really simple, and its functionality covers all my scenarios.
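My whole scenario boils down to one rclone sync invocation per backup run; a minimal Python wrapper sketch, where the remote name gdrive and the paths are hypothetical (the remote is whatever was set up with rclone config):

```python
import subprocess

def build_backup_command(local="/var/backups/site", remote="gdrive:site-backup"):
    # `rclone sync` makes the remote an exact mirror of the local folder
    return ["rclone", "sync", local, remote]

# subprocess.run(build_backup_command(), check=True)  # e.g. from a cron job
```

The run call is left commented out; in practice the same one-liner sits directly in a cron entry.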

The installation instructions only cover Linux directly, and for some reason they put the executable into sbin, which is all but useful -the idea is for it to be executed, with different credentials, by each user. Finally, I used the following instructions for Linux:

cd /tmp
wget http://downloads.rclone.org/rclone-v1.28-linux-amd64.zip
unzip rclone-v1.28-linux-amd64.zip

sudo cp rclone-v1.28-linux-amd64/rclone /usr/local/bin
sudo chown root:root /usr/local/bin/rclone
sudo chmod 755 /usr/local/bin/rclone

#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone-v1.28-linux-amd64/rclone.1 /usr/local/share/man/man1/
sudo mandb
rm -Rf rclone-v1.28-linux-amd64*

And almost the same instructions for OsX:

cd /tmp
wget http://downloads.rclone.org/rclone-v1.28-osx-amd64.zip
unzip rclone-v1.28-osx-amd64.zip

sudo cp rclone-v1.28-osx-amd64/rclone /usr/local/bin
sudo chown root:wheel /usr/local/bin/rclone
sudo chmod 755 /usr/local/bin/rclone

#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone-v1.28-osx-amd64/rclone.1 /usr/local/share/man/man1/
rm -Rf rclone-v1.28-osx-amd64*

Two factor authentication with SSH

google authenticator icon This must be the best way to strengthen the security of your ssh connection for those cases where ssh keys are not available.

Tip copied from this arm-blog

Redmine on Debian

redmine icon Added instructions to install Redmine on Debian, using PostgreSQL and Nginx.

The main problem was in fact setting up email support correctly (and then trying to escape the SPAM folders in Google).