16th Nov 2022: CodeArtifact and MFA
A request I have received repeatedly about the CodeArtifact+Maven plugin is support for multi-factor authentication.
The request was to prompt the user for the MFA token; however, when I tried setting up a user with MFA, I could not get AWS to prompt me for that token. Alas, the problem is that MFA can be set up for role profiles or for user profiles, and only in the first case does AWS prompt for the MFA token.
I have documented this issue separately and upgraded the plugin to include this support. For the moment, only role profiles are supported.
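As a sketch of how such detection can work (the helper below is illustrative, not the plugin's actual code): role profiles in ~/.aws/config are the ones carrying a role_arn entry, and the presence of mfa_serial marks those for which the AWS CLI itself will prompt for a token.

```python
import configparser

# Illustrative helper, not the plugin's actual code: list the profiles in an
# AWS config file that are role profiles with MFA enabled, i.e. those for
# which the aws cli will prompt for a token.
def mfa_profiles(config_text):
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    result = []
    for section in parser.sections():
        # In ~/.aws/config, named profiles appear as "profile <name>"
        name = section.removeprefix("profile ")
        if parser.has_option(section, "role_arn") and \
                parser.has_option(section, "mfa_serial"):
            result.append(name)
    return result

SAMPLE = """
[profile plain-user]
region = eu-west-1

[profile admin-role]
role_arn = arn:aws:iam::123456789012:role/admin
source_profile = plain-user
mfa_serial = arn:aws:iam::123456789012:mfa/me
"""

print(mfa_profiles(SAMPLE))  # ['admin-role']
```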
29th October 2022: Kubegres K8s operator
I have been using Kubegres, a PostgreSQL operator, in a test AWS Kubernetes environment, without any issues.
The version I installed was 1.15, using one replica with backups (to EFS) enabled.
The operator does not seem to be under heavy development (or it is considered mostly stable). There has been a new 1.16 version since September 2022, but I haven't tried the upgrade yet.
Alas, I had my first issue recently, when the replica pod failed to start, which resulted in missing backups. All due to the usual volume node affinity conflict error, caused by a more bizarre error behind it.
25th September 2022: VirtualBox mount points
Starting a Linux virtual machine in VirtualBox, I got the typical emergency mode error:
Welcome to emergency mode! After logging in, type “journalctl -xb” to view system logs, “systemctl reboot” to reboot, “systemctl default” or ^D to try again to boot into default mode.
The error was related to a shared folder I had setup, which was not accessible. The entry in /etc/fstab read:
Infrastructure /mnt/infrastructure vboxsf rw,nosuid,nodev,relatime,uid=1000,gid=1000 0 0
And the VirtualBox machine settings showed that the mount point was still being created:
The problem was that the VirtualBox application had been updated, as well as its extensions, but the guest additions had not been updated in the virtual machine. As a result, the shared folder was not reachable, and because there was no nofail attribute in the mount point, the kernel entered emergency mode.
To solve the issue:
cd /opt/VBoxGuestAdditions-6.1.26/init
sudo ./vboxadd setup
sudo reboot
Obviously, update the first line to reflect the current guest additions version (6.1.26 in my case).
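As mentioned, the kernel only entered emergency mode because the entry lacked the nofail attribute. Adding it (sketched below on the /etc/fstab line from this post) lets the system boot normally with the share simply missing:

```
Infrastructure /mnt/infrastructure vboxsf rw,nosuid,nodev,relatime,nofail,uid=1000,gid=1000 0 0
```

With nofail, a failed mount is logged but no longer blocks the boot, so the guest additions can be fixed from a normal session instead of emergency mode.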
29th March 2022: Pixel 6
My Pixel 4a (5G) lasted me around 11 months. One crash too many, and the screen became green on one part, rainbow-ish on the other, useless as a whole. I sent the phone to Google for repair. Response: repair not possible, but a refurbished phone was available for 329 EUR. Considering that the phone had cost me 399 EUR, and that I got a company discount for new phones but none for repairs, I decided to purchase a new one.
In the meantime, I had to resuscitate my old Pixel 2. Battery issues notwithstanding, the phone is in really great condition. It has fallen many, many times, yet with no lasting damage. The Pixel 2 is smaller, with a Gorilla Glass 5 front, while the Pixel 4a has only Gorilla Glass 3. I found a good online offer, the Samsung S20 FE for 379 EUR, but alas, I am not very fond of my own experiences with Samsung phones, and it has just a Gorilla Glass 3 front screen; given my own history with the Pixel 4a, its fall survival rate was discouraging, at best.
So, I added 200 EUR to my budget and got a Pixel 6. Glorious Gorilla Glass Victus, a lot of great reviews, but subject to the underwhelming Google support service. I hope this phone will live up to my expectations; I do not see myself getting into the alternative, iPhone territory...
However, this was somehow irrational: the Pixel 2 is working fine, but just out of support, plus really bad battery life. The Pixel 4a was destroyed after a few falls, none of them really remarkable. My brother bought a Pixel 4a on my recommendation, and he got a bad one, requiring a replacement. Which would be fine, except that Google support was really bad and he had to resort to consumer protection to get his money back. I guess I am just too invested in Android and I consider Google to be the only acceptable provider, which seems a disputable call anyway...
I have now installed the basics, the usual Google applications, and spent quite a long time installing the latest updates. I decided not to copy or transfer data from my previous installation, but to start anew. So I went to the Play Store website, My Apps, and started installing the chosen ones... Not an easy choice: My Apps in Google Play shows all the applications I have ever installed on any of my Android smartphones or tablets. It is possible to select a specific category, out of 33 categories, excluding games (which have, on their own, 17 subcategories). I would have liked to just choose the applications that existed on my previous Pixel 4a, as those were the ones I was using on a daily basis. In the end, I just installed:
- Google Authenticator
- Google Find Device
- Google Fit
- Google Lens
- Idealo shopping
- Microsoft Authenticator
- Microsoft Teams
- Solid explorer
- Tik Tok
- Vivaldi browser
- Yahoo email
- Zoho email
17th March 2022: Comparing AWS and GCP Certification
After my previous blog on the AWS certification, I would like to compare it with GCP, both on difficulty and overall interest and approach.
In my view, AWS exams are very AWS-centric: you can mostly discard any exam answer that includes non-AWS technology or cumbersome processes, like writing custom scripts; allegedly, AWS has already developed some automation process for you.
Google exams are much more open, more focused on software architecture, and choosing a non-Google technology is often the answer to an exam question. But Google exams are less demanding, with more time to answer each question, and overall I find them easier, as long as you have an architecture background.
There are not as many services in GCP as in AWS, but there are also far fewer good-quality online courses. As an additional remark, I find GCP much more consistent in its services' implementation. In AWS you can understand the background of the processes and still wonder why some specific processes work the way they do, and the answer is probably that they haven't had the time yet to implement them in a different manner.
Perhaps more importantly: how interesting is it to get the AWS or GCP certification? Very, very much. CV-wise it is definitely a nice-to-have, and a good discriminator. But better than that, it is a good way to expose yourself to a lot of technologies that you probably don't touch on a daily basis.
This said, I find the Google certification globally more interesting than the AWS one, in that it focuses much more on plain architecture, site reliability, and technologies that you can use inside or outside GCP. The AWS professional architecture certification focuses on many aspects very specific to AWS, like AWS Organizations, hybrid scenarios, or migrations to AWS, that have no bearing on your architecture work outside AWS.
17th March 2022: AWS Certification
Yesterday I completed my AWS re-certification, the three associate and the two professional levels, so I wanted to write down some of my observations on the process.
- In 2016 I got the associate Architect certification. I did it reading one book: Host Your Web Site In The Cloud, by Jeff Barr (978-0980576832). The book was already outdated (from 2010), so I did quite a lot of hands-on work and reading of the AWS documentation. Plenty easier than now, of course.
- Although my certification expired in 2018, I had one additional year to meet the recertification deadline, and I effectively passed it in 2019. Certifications now expire in three years instead of two, but somehow I had expected to have that additional year for the recertification deadline, which no longer seems to be the case. As a result, I have had a few stressful months to comply with the deadlines (bad planning on my side).
- In 2019 I purchased three courses on Udemy, one for each associate exam, from Ryan Kroonenburg (cloudguru). At around 10$ each, they have had tremendous value, as they were mostly my only source to pass the certifications in 2019, and again in 2022. However, they are now a bit outdated: perhaps not much, as a few videos were added, but considering that you need a 720 mark in the exam, any missing concept is a problem.
- In 2019, I passed the professional exams by going through the AWS whitepapers, after passing the associate exams. Lots of reading, lots of learning, lots of time spent on the process, which I had at the time. In 2022 I found a new course on Udemy for the Architect exam, by Stephane Maarek, which is really good. I have used another course from him for the DevOps exam, which is arguably better, having more hands-on content, but I definitely preferred the slides-only approach of the Architect course.
- I needed a bit less than seven weeks to study for and pass the five certifications. Considering working and spare time, this amounts to around four to five days for each certification, so it implies quite a lot of dedication. And I know that is quite a short time, but my professional activity involves both architecture and the management of AWS projects, so I was not starting from scratch.
- The courses above focus too much on the exams, and the kind of questions you can face. As a result, they can go into detail in some areas just for the benefit of some specific exam question. They lack a focus on explaining the background of each issue, which is what you need, not only to master AWS, but also to pass the examination. If you understand why and how AWS implements CloudWatch subscription filters, you should be able to answer immediately that a subscription filter cannot export data to S3 directly, but that it needs something like Kinesis Firehose to do so. Why do some actions handle only SNS notifications while others allow lambda integrations? If you approach AWS this way, you will understand it much better, and have many more options to pass the exam.
- The associate certifications have a different scope than the professional ones; it is not just about difficulty. They require a more hands-on approach, a more detailed view of specific concepts, including performance, metrics, etc. However, passing a professional certification automatically renews the associate ones below it: not sure why, as, again, the scope is different.
- Studying for the associate certifications facilitates studying afterwards for the professional ones. But studying for the professional ones, much broader in scope, would make it much easier to pass the associate ones. I would suggest trying the associate exams first and, passing them or not, going then for the professional ones.
- The DevOps professional exam was the most difficult one for me: from the very first question I was walking in quicksand. I was definitely far less prepared, but the questions were also much trickier. It was the only exam that I finished with just a few minutes left, and I was being really careful managing my time.
- The exams are not easy. The professional ones require a lot of knowledge. Plus, you have on average 2 minutes and 40 seconds to answer each question, and many questions are long enough to require that time just to understand the problem. Normally, for single-answer questions, there are only two answers out of four that make sense, and thinking long enough gives you the answer (if you have the time, that is). For the DevOps exam, the final two acceptable answers for each question still both make plenty of sense; you need to know the specific AWS implementation to get the answer right.
22nd Feb 2022: AWS SysOps certification
I am currently doing the re-certification for my AWS certificates: I hold the three associate certifications (Solutions Architect, SysOps Administrator, Developer) and the two professional ones (Solutions Architect, DevOps Engineer).
I have renewed this month the three associates, and I am not sure if I will have the time to renew the two professional ones before expiration, in 4 weeks.
The experience so far is that the exams are getting more complicated: definitely no more simple questions like which service would best fit a given functionality, and more ambiguous scenarios. In the case of the SysOps exam, there were now, in addition to the quiz questions, three exam labs, with the consequence that the result was unfortunately not available immediately after the exam anymore.
I had 50 questions in my test; I have seen other people mentioning 53 and 55, so I am not sure if there is some randomness in the number of questions, or just the normal evolution of the test. I do usual administration in AWS, but the three tasks requested during the exam were definitely outside my expertise. Moreover, the usual contextual help is missing, as documented, so you need to perform those tasks quite blindly. Still, the console design in AWS follows good practices and most tasks can be performed quite intuitively; I think I completed them without too many issues, and the final report was fine on my performance during the labs. As a note, copying and pasting during the labs is definitely a headache and something that should be improved.
After the exam there is just a note that the results will be received within 5 business days. The official documentation mentions that this CAN be the case: I imagine that a perfect score on the 50 quiz questions could already provide a 790 score, enough to pass the test even with the worst results on the lab questions. I received the results, as for all my exams, on the day after the examination, at 21:01 in the evening.
27th Jan 2022: Obsidian
I have been using a basic editor to keep all my notes, and the result is definitely sub-optimal.
Evernote had worked quite well in the past, and I do not recall exactly why I stopped using it: probably the lack of a Linux client, or the web interface letting me down too many times.
I have now started using Obsidian and it looks pretty nice: it stores the notes in markdown, so it is basically plain text. It is less visually appealing than Evernote, but the linking and navigation between notes is clearly an advantage.
This is my current Obsidian setup, including my way of sharing files across multiple devices (using Dropbox) plus version control (git).
27th Jan 2022: CodeArtifact and profiles
I have just submitted a new version of the CodeArtifact+Maven plugin, with additional functionality to support AWS profiles. It now parses the AWS configuration files to find any defined profiles, which are then used to request the authorization token.
The plugin's window is getting bigger and bigger, with more fields to fill, although the latest added fields are handled as comboboxes, already pre-filled, so the complexity should not increase:
5th Jan 2022: OpenAPI for AWS - python
The openapi4aws utility had an initial implementation in Python, and I developed the Java/Maven version to have a clean integration in our build processes, which use Maven.
After integrating it into our Maven poms, we still needed to update the resulting API gateway specifications on the fly, to reflect changes in the k8s endpoints in use. So I decided to modify the original Python implementation to reflect the changes introduced in the Java version.
The result is a single Python file with under 250 lines of code, where two thirds of the code just handles the configuration parameters. I had fun migrating the data structures from Java, implemented as classes, to Python, where I used a mix of classes, dictionaries and data classes.
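As a small illustration of that mix (the names and fields below are hypothetical, not the actual openapi4aws structures), dataclasses remove most of the boilerplate that the Java classes needed:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical shapes, for illustration only: the actual openapi4aws
# structures are named and organized differently.
@dataclass
class Authorizer:
    name: str
    identity_source: str
    audiences: List[str] = field(default_factory=list)

@dataclass
class Integration:
    path: str
    uri: str
    authorizer: str = ""

@dataclass
class Configuration:
    # plain dictionaries and lists mix naturally with the dataclasses
    authorizers: Dict[str, Authorizer] = field(default_factory=dict)
    integrations: List[Integration] = field(default_factory=list)

cfg = Configuration()
cfg.authorizers["cognito"] = Authorizer("cognito", "$request.header.Authorization")
cfg.integrations.append(Integration("/users", "http://users.k8s.local", "cognito"))
print(cfg.integrations[0].path)  # /users
```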
16th Dec 2021: Git origins and main/master
I keep source code repositories in GitHub, but not for all my projects. I also keep my own git repository on a VPS, backed up regularly, which hosts my private projects, plus the public ones hosted in GitHub.
To keep a repository synchronized both with GitHub and with my private git repository, I usually create my private git repository first, then create a repository in GitHub with the same name. Afterwards, I need to add GitHub as a remote repository; for example, for my project openapi4aws-maven-plugin, I would do:
git remote add github firstname.lastname@example.org:coderazzi/openapi4aws-maven-plugin.git
git push -u github main
Now, changes can be pushed to my private repository doing:
git push origin
And to Github doing:
git push github
GitHub did recently (October 2020) a migration of the default branch, from master to main, which means that, at least for any new repositories, I had better follow the same approach to ensure that I can push changes to both GitHub and my own private git server.
In my private repository, I can rename my default created branch from master to main doing:
git branch -m master main
git push -u origin main
git push origin --delete master
But this last command will likely fail, with the message: deletion of the current branch prohibited. To solve it, go to the git server, access the folder where the git repository is stored, and do:
git symbolic-ref HEAD refs/heads/main
Now, on the local repository, the command
git push origin --delete master
will work as expected.
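An alternative I did not use above, sketched here for completeness (the URLs are illustrative): a single remote with two push URLs, so that one git push updates both servers. The .git/config would look like:

```
[remote "origin"]
	url = myserver:coderazzi/openapi4aws-maven-plugin.git
	pushurl = myserver:coderazzi/openapi4aws-maven-plugin.git
	pushurl = git@github.com:coderazzi/openapi4aws-maven-plugin.git
```

The same can be configured with git remote set-url --add --push origin <url>. Note that once any pushurl is defined, the plain url is no longer used for pushing, so the private server's URL must be added as a pushurl as well.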
12th Dec 2021: OpenAPI for AWS
I am working on a project where we define the interfaces of all microservices using Swagger, and then generate the API gateway specification dynamically. AWS allows importing an OpenAPI specification to define an API gateway, and it supports specific AWS directives to define endpoints and authorizers (security). As this information is dynamic (mainly the endpoints), we do not want to add it to the otherwise static microservices definitions, but to add it to the final specification at some later stage.
Here comes a new utility, openapi4aws, that does exactly that: it takes one or more files to update, plus a definition of the authorizers and all possible endpoints, and overwrites those files to include the full AWS information. This way, it is possible to re-import the specification into a working API gateway without having to manually or procedurally define its integration with the backend implementation.
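For illustration, this is the kind of block such a tool injects under each path, using AWS's x-amazon-apigateway-integration OpenAPI extension (the path, URI and values below are made up for the example):

```yaml
paths:
  /users:
    get:
      x-amazon-apigateway-integration:
        type: http_proxy
        httpMethod: GET
        uri: http://users.k8s.internal/users
        payloadFormatVersion: "1.0"
```

Since the uri points at the backend endpoint, which changes per environment, keeping it out of the static microservice definitions and injecting it at build time is exactly the point of the utility.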
It is available in Maven Central, and the source is available in GitHub under an open source MIT license.
8th Dec 2021: Windows: Git pull error on ssh-rsa
Trying to update my AWS CodeCommit repository today in my Windows virtual machine suddenly failed with the following error:
Unable to negotiate with 184.108.40.206 port 22: no matching host key type found. Their offer: ssh-rsa
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
The access is done via ssh, so I thought there was an error in my .ssh/config file, or perhaps I had updated my ssh key in AWS and forgotten to download it to this box. After many checks, everything was fine, yet I couldn't git pull.
Stack Overflow had the solution: the .ssh/config entry needed to be changed to look like:
Host git-codecommit.*.amazonaws.com
    IdentityFile ~/.ssh/aws
    User ....
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedKeyTypes +ssh-rsa
(Adding the last two lines)
7th October 2021: sudo docker cp
Last time (I hope) that the command
sudo docker cp
bites me. To access docker I need sudo access, and the files are copied into the target folder owned by root. Not only can I most likely not access them, but if any folder within does not have a+w permissions, I cannot remove them either, as my sudo access is limited.
Thanks to this Stack Overflow link, I can now use instead the following Python script to copy from a container source to a host target:
from subprocess import Popen, PIPE, CalledProcessError
import sys
import tarfile


def exclude_root(tarinfos):
    # skip the root '.' entry, and ensure every extracted file is
    # readable/writable by the current user
    for tarinfo in tarinfos:
        print(tarinfo.name)
        if tarinfo.name != '.':
            tarinfo.mode |= 0o600
            yield tarinfo


def main(source, target_folder):
    # 'docker cp SOURCE -' streams the files as a tar archive to stdout,
    # which is then extracted locally under the current user's ownership
    export_args = ['sudo', 'docker', 'cp', source, '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall(target_folder, members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)


main(sys.argv[1], sys.argv[2])
6th October 2021: VirtualBox on MacBook Pro
A client requires that I access my remote Windows machine through Citrix. At home I only have Linux or Mac computers, and whenever I log into Citrix, and from there launch the remote desktop, the keyboard mapping is just broken. Entering simple characters like a semicolon is a matter of trying the key several times: it will display a different character each time, until eventually the semicolon appears. You develop quite an art of copying and pasting...
However, if I use a Windows virtual machine in Linux, the mapping works. Mind you, it implies that I am using a virtual machine to access some cloud bridge that links me to my remote desktop. Beautiful.
But using a Windows virtual machine on the MacBook, using VirtualBox, shows abysmal performance. Plus, the screen appears very tiny, requiring upscaling (VirtualBox settings, Display, Scale factor). The best solution I have found to overcome this is executing VirtualBox in low resolution:
- Open Finder, then go to Applications and select VirtualBox
- Right click VirtualBox and select Show Package Contents
- Select now VirtualBoxVM under Contents/Resources
- Right click VirtualBoxVM and select Get Info
- Check the checkbox Open in Low Resolution
Unfortunately, this affects all the virtual machines, and the display looks definitely worse, but the performance becomes really acceptable. I get that using Parallels or VMware Fusion would be a more performant solution, at a price (plus I could not transfer my Windows license).
A detail that still killed me was that I needed Windows just to launch the remote desktop in Citrix. But inside Citrix, ALT+TAB would just show me the processes in Windows, not those in Citrix. Likewise, all hotkeys would be captured by the Windows virtual machine, rendering them mostly useless. Citrix to the rescue: open regedit in the Windows virtual machine and set the value Remote for the name TransparentKeyPassthrough in the following two keys:
- HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Citrix\ICA Client\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard
- HKEY_CURRENT_USER\SOFTWARE\Citrix\ICAClient\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard\
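For convenience, the same change can be applied by importing a .reg file; a sketch, assuming the value is a string (REG_SZ):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Citrix\ICA Client\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard]
"TransparentKeyPassthrough"="Remote"

[HKEY_CURRENT_USER\SOFTWARE\Citrix\ICAClient\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard]
"TransparentKeyPassthrough"="Remote"
```

Double-clicking the file (or running reg import on it) sets both values at once, which is handy if the virtual machine gets rebuilt.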
19th Sep 2021: Cancelling tasks
When I developed my first IntelliJ IDEA plugin here, I did the customary checks and, pressed to share it with my team, I uploaded it to the IDEA marketplace.
The logic of the plugin is quite simple: read a file, obtain a token by invoking the aws binary with the specified parameters, and then update the initial file. These tasks normally execute very quickly, but the modal dialog that shows the progress includes a Cancel button, which I failed to handle. My initial thought was that the IDEA runtime would just cancel the progress and stop my background thread. Which is definitely wrong: Java initially had a Thread.stop method, which was quickly deprecated. Although it can be used, it should not be, and the IDEA runtime would be quite poor if it did.
So IntelliJ IDEA does the obvious: it sets a flag if cancel is pressed, and it is the thread's task to check it regularly. Definitely a better solution, although the resulting code is uglier, invoking an isCancelled() method at almost every step.
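The pattern itself is language-agnostic; a minimal sketch in Python (the plugin itself is Java, and here threading.Event plays the role of IDEA's cancellation flag):

```python
import threading

# Cooperative cancellation: the worker checks a shared flag between steps,
# instead of being stopped from the outside.
def run_steps(steps, cancelled):
    done = []
    for step in steps:
        if cancelled.is_set():   # the equivalent of IDEA's isCancelled()
            break
        done.append(step())
    return done

cancel = threading.Event()
steps = [lambda: "read file", lambda: "fetch token", lambda: "update file"]
print(run_steps(steps, cancel))   # nobody cancelled: all three steps run

cancel.set()
print(run_steps(steps, cancel))   # [] - cancelled before the first step
```

The cost is exactly the ugliness mentioned above: every step boundary needs a check, and a step that blocks for a long time cannot be interrupted in the middle.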
And since I had to invest some time to get this running, I decided to decorate the plugin: in my rush, the initial version was just a bunch of labels and input fields. The presence of an image makes it somewhat better (or so I believe):
4th July 2021: Backing up Thunderbird
Each time I set up a new computer, I manage to have it up and running with my preferred configuration by following very well defined steps, and taking advantage of the fact that most of my development is based on git, so I just need to clone my personal repositories.
I have now also added the instructions to replicate a Thunderbird profile, so the same accounts exist directly on the new setup.
26th June 2021: Corsair 280X Build
The last time I built my own computer was in 2012. That was a quad-core i7-3770 with 32 GB that has served me well since then. I do very little gaming (just Civ V) and most processing is handled on some remote machine, so I haven't seen the need to upgrade to a newer machine. But some operations were indeed starting to seem quite slow, and the PC can be noisy enough to be noticed as soon as something requires some CPU power. Adding to this some problems with the video output, I decided to go for a new build.
Nothing fancy: a Ryzen 5600X and, favouring small cases, a micro-ATX motherboard, the ASRock B550M Pro4, in a Corsair 280X case. Back in 2012, Lian Li was the case to have: all brushed aluminium, high quality. This time, I was rooting for the Lian Li PC-011D mini, but I had already purchased the Corsair case for a friend's build that finally hadn't happened, so I decided to use it for my own build.
My previous build used a Lian Li PC-V351 case, a small evolution of the PC-V350 that I had used before. These are nice cases, but not nice cases to tinker with. Opening one definitely requires a screwdriver (6 screws to open either of the side panels). Reaching the hard drive cage could be done without a screwdriver, but meant fighting the connector of a small fan sitting behind it. Any PCI card modification required opening the case completely, taking the motherboard out, all wires disconnected, and re-building it. Nice case, but nightmarish.
The Corsair 280X is 40% bigger: just a bit wider, less deep, and a full 10 cm higher. It looks bigger, but just slightly, until you start building the PC and realize how much space you have, how well everything is organized, and how well built the case is. It includes two fans that are totally silent.
I had purchased a Noctua cooler to replace the Wraith Stealth cooler that comes with the Ryzen 5600X, and initially thought of returning it: the default cooler has a distinct noise, but I thought that once the case was closed, with its fans, you would not hear it. Then I mounted the Noctua NH-L12S, and I could not really tell whether the system was on or off, even with the case still open! Kudos as well to the power supply, the also silent be quiet! Pure Power 11.
The only thing that bums me out about the build is a detail on the motherboard: the second M.2 slot does not have all the lanes it should, so any PCIe 3 SSD you place there will run at lower speeds. I bought a cheap NVMe-to-PCIe adapter for 14 euros, and my measurements are:
|                 | Average read | Average write | Access time |
| PCIe4 M2.1 slot | 3.5 GB/s     | 615.3 MB/s    | 0.02 ms     |
| PCIe3 M2.2 slot | 1.6 GB/s     | 615.3 MB/s    | 0.02 ms     |
| PCIe adapter    | 2.9 GB/s     | 605.8 MB/s    | 0.02 ms     |
So, same access time and average write, but definitely better to use the additional adapter than the M2.2 slot, which is therefore useless.
The only doubt I have now is that the case is beautiful, easy to service, but mostly empty. How much better a mini-ITX build could have been...
9th May 2021: CodeArtifact + Maven Idea plugin
In a new project, we needed to publish artifacts to a repository in AWS, using CodeArtifact. The instructions to do so are very clear, but they are a poor fit when using an integrated environment such as IntelliJ IDEA.
Basically, you would need to update an environment variable with an authorization token obtained by invoking an aws cli command. After setting the environment, you would need to launch the IDE from inside that environment. And as the token needs to be refreshed, you need to quit the IDE and repeat the process every 12 hours.
The solution was easy: instead of referring in the Maven settings file to an environment variable, include there directly the received AWS token. And, to automate the process, better than using an independent script, why not have an IntelliJ IDEA plugin that implements exactly this?
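The resulting server entry in the Maven settings file might look like this sketch (the server id and the placeholder are illustrative; aws is the fixed username CodeArtifact expects, and keeping the password element holding a fresh token is exactly the plugin's job):

```xml
<settings>
  <servers>
    <server>
      <id>codeartifact</id>
      <username>aws</username>
      <!-- replaced with a fresh authorization token every 12 hours -->
      <password>PASTE_AUTHORIZATION_TOKEN_HERE</password>
    </server>
  </servers>
</settings>
```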
6th March 2021: TableFilter v5.5.3
New release for this Java library, with a small patch covering a rare null pointer exception.
The source code and the issues management are now hosted in GitHub, and of course, the code has moved from Mercurial to Git. It is a funny coincidence: I started using SVN for version control, and moved to Mercurial on the 5th May 2010, using Google Code to host the repository. Exactly five years later, on the 5th May 2015, I had to move from Google Code to Bitbucket, and almost exactly another 5 years later, on the 7th May 2020, I completed the migration from Bitbucket to GitHub. Which means ten years and two days of good Mercurial support. Shame that Bitbucket is kicking it out...
20th February 2021: Recovering Touch ID on MacBook Pro
In Summer 2019, my daughter gave me a badly closed bottle of Fanta, which I placed in my laptop bag, together with my MacBook Pro. A short while later, I learnt two things: that my bag was quite watertight, and that the MacBook was a well built machine that had survived the Fanta puddle experience unscathed.
This happened during a flight to some holidays, where I barely used the laptop, but eventually I realized that my fingerprint was not recognized anymore. Somehow I linked both experiences together, assuming that the liquid had affected or broken the T1 chip that handles Touch ID. However, this seems a faulty theory, as the T1 is used for other things (like driving the Touch Bar's screen), which still worked fine.
I tried all the possible options to get it working again. I could not remove the existing fingerprints, resetting the NVRAM did not help, and a problem reported by other users (removing Symantec Endpoint Protection) was definitely not my problem.
The only unproven solution was reinstalling macOS. I had bought my laptop with Sierra installed, had dismissed High Sierra and installed Mojave at some point, but didn't see any benefit in installing Catalina. Now, I was 30 months behind, and Big Sur was calling, so I decided to go the hard way and install it from scratch, as a last attempt to get Touch ID working again.
And it did. I am happy to have Touch ID working again, but dismayed to know that it can fail again, Fanta likely notwithstanding, and that there is no obvious way to get it working again, except for a full re-installation.