Cancelling tasks

When I developed my first IntelliJ IDEA plugin here, I did the customary checks and, pressed to share it with my team, I uploaded it to the IDEA marketplace.

The logic of the plugin is quite simple: read a file, obtain a token by invoking the aws binary with specified parameters, and then update the initial file. These tasks normally execute very quickly, but the modal dialog that shows the progress includes a Cancel button, which I failed to handle. My initial thought was that the IDEA runtime would just cancel the progress and stop my background thread. Which is definitely wrong: Java initially had a Thread.stop method, which was quickly deprecated. Although it can still be called, it should not be used, and the IDEA runtime would definitely be quite poor if it relied on it.

So IntelliJ IDEA does the obvious: it sets a flag when Cancel is pressed, and it is the thread's task to check it regularly. Definitely a better solution, although the resulting code is uglier, invoking an isCancelled() method at almost every step.
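
The pattern is the same in any language. Here is a minimal python sketch of this cooperative cancellation (the names are mine, not the IDEA API): a shared flag that the worker checks between steps.

```python
import threading

class CancelledError(Exception):
    """Raised when the user has pressed Cancel."""

def run_task(steps, cancelled):
    """Run the steps sequentially, checking the cancellation flag between them."""
    results = []
    for step in steps:
        if cancelled.is_set():  # the equivalent of checking isCancelled()
            raise CancelledError()
        results.append(step())
    return results

cancelled = threading.Event()   # the Cancel button would call cancelled.set()
results = run_task([lambda: "read file", lambda: "obtain token"], cancelled)
```

The ugliness is exactly the one described above: the check has to be sprinkled between every step, because nothing else will interrupt the thread.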

And since I had to invest some time to get this running, I decided to decorate the plugin: in my rush, the initial version was just a bunch of labels and input fields. The presence of an image makes it somehow better (or so I believe):

Backing up Thunderbird

Each time I set up a new computer, I manage to have it up and running with my preferred configuration by following very well defined steps, taking advantage of the fact that most of my development is based on git, so I just need to clone my personal repositories.

I have now also added the instructions to replicate a Thunderbird profile, so the same accounts exist directly on the new setup.

Corsair 280X Build

Last time I built my own computer was in 2012. That was a quad-core i7-3770 with 32 GB that has served me well since then. I do very little gaming (just Civ V) and most processing is handled on some remote machine, so I hadn't seen the need to upgrade to a newer machine. But some operations have indeed started to seem quite slow, and the PC can be noisy enough to be noticed as soon as something requires some CPU power. Adding to this some problems with the video output, I decided to go for a new build.

Nothing fancy: a Ryzen 5600X, and, favouring small cases, a mATX motherboard, the ASRock B550M Pro4, in a Corsair 280X case. Back in 2012, LianLi was the case to have: all brushed aluminium, high quality. This time, I was rooting for the LianLi PC-011D mini, but I had already purchased the Corsair case for a friend's build that finally hadn't happened, so I decided to use it for my build.

My previous build used a LianLi PC-V351 case, a small evolution of the PC-V350 that I had already used previously. These are nice cases, but not nice cases to tinker with. Opening one definitely requires a screwdriver -six screws to open any of the side panels-. Reaching the hard drive cage could be done without a screwdriver, but meant fighting the connector of a small fan sitting behind it. Any PCI card modification required opening the case totally, taking the motherboard out -all wires out- and rebuilding it. Nice case, but nightmarish.

The Corsair 280X is 40% bigger: just a bit wider, less deep, and a full 10 cm taller. It looks bigger, but just slightly, until you start building the PC and realize how much space you have, how well everything is organized, and how well built the case is. It includes two fans that are totally silent.

I had purchased a Noctua cooler to replace the Wraith Stealth cooler that comes with the Ryzen 5600X, and initially thought of returning it: the default cooler has a distinct noise, but I thought that once the case was closed, with its fans, you would not hear it. Then I mounted the Noctua NH-L12S, and I could not really tell when the system was on or off, even with the case still open! Kudos as well to the power supply, the equally silent be quiet! Pure Power 11.

The only thing that bothers me about the build is a detail on the motherboard: the second M.2 slot does not get all the lanes it should, so any PCIe 3 SSD placed there will run at lower speed. I bought a cheap NVMe-to-PCIe adapter for 14 euros, and my measurements are:

                   Average read   Average write   Access time
PCIe4 M2.1 slot    3.5 GB/s       615.3 MB/s      0.02 ms
PCIe3 M2.2 slot    1.6 GB/s       615.3 MB/s      0.02 ms
PCIe adapter       2.9 GB/s       605.8 MB/s      0.02 ms

So, same access time and average write, but the additional adapter is definitely better than the M.2-2 slot. That slot is therefore useless.

The only doubt I have now is that the case is beautiful, easy to service, but mostly empty. How much better could a mini-ITX build have been...

CodeArtifact + Maven Idea plugin

In a new project, we needed to publish artifacts to a repository in AWS, using CodeArtifact. The instructions to do so are very clear, but they are a poor fit when using an integrated environment such as IntelliJ IDEA.

Basically, you would need to set an environment variable with an authorization token obtained by invoking an aws cli command. After setting the environment, you would need to launch the IDE -from inside that environment-. And as the token needs to be refreshed, you have to quit the IDE and repeat the process every 12 hours.

The solution was easy: instead of referring in the Maven settings file to an environment variable, include there directly the received AWS token. And, to automate the process, rather than using an independent script, why not have an IntelliJ IDEA plugin that implements exactly this?
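
In essence, this is what the plugin automates. A minimal python sketch of the two steps, where the domain and owner values are made-up placeholders, and the regex-based rewrite of the settings file is a simplification of the real XML handling:

```python
import re
import subprocess

def fetch_token(domain, owner):
    """Ask the aws cli for a fresh CodeArtifact authorization token."""
    return subprocess.run(
        ["aws", "codeartifact", "get-authorization-token",
         "--domain", domain, "--domain-owner", owner,
         "--query", "authorizationToken", "--output", "text"],
        capture_output=True, text=True, check=True).stdout.strip()

def update_settings(settings_xml, token):
    """Replace the <password> entry of the server definition with the token."""
    return re.sub(r"<password>[^<]*</password>",
                  "<password>%s</password>" % token, settings_xml)

# token = fetch_token("my-domain", "123456789012")  # requires aws credentials
updated = update_settings("<password>expired</password>", "fresh-token")
```

Run this every time the token expires and the IDE never needs restarting, which is exactly the convenience the plugin provides.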

The plugin is described here, already published in the public IDEA plugins repository, and available on Github.

TableFilter v5.5.3

New release for this Java library, with a small patch covering a rare null pointer exception.


Recovering Touch Id on Macbook pro

In Summer 2019, my daughter gave me a badly closed bottle of Fanta, which I placed in my laptop bag, together with my Macbook Pro. A short while later, I learnt two things: that my bag was quite watertight, and that the Macbook was a well-built machine that had survived the Fanta puddle experience unscathed.

This happened during a flight to some holidays, where I used the laptop little or not at all, but eventually I realized that my fingerprint was not recognized anymore. Somehow I linked both experiences together, assuming that the liquid had affected or broken the T1 chip that handles Touch Id. However, this seems a faulty theory, as the T1 is used for other things -like driving the Touch Bar's screen, which still worked fine-.

I tried all the possible options to get it working again: I could not remove the existing fingerprints, resetting the NVRAM helped nothing, and a problem reported by other users -removing Symantec Endpoint Protection- was definitely not my problem.

The only unproven solution was reinstalling MacOs. I had bought my laptop with Sierra installed, had skipped High Sierra and installed Mojave at some point, but didn't see any benefit in installing Catalina. Now I was 30 months behind, and Big Sur was calling, so I decided to go the hard way and install it from scratch, as a last try to get Touch Id working again.

And it did it. I am happy to have Touch Id working again, but dismayed to know that it can fail again -Fanta likely notwithstanding-, and that there is no obvious way to get it working again, except for a full re-installation.

Setting up Ubuntu, 20.04 edition

For the LTS editions of Ubuntu, I prefer to start with a clean slate: copying my SSD to some external backup and performing a destructive installation, erasing the whole SSD. And as this means having to reconfigure Ubuntu completely, I keep a record of all the steps I take.

I have updated my Ubuntu setup to list these steps: how to configure workspaces, how to define my shortcuts, how to make Guake start at startup, etc.

New optmatch version, first non-beta

Version 1.0.0 of optmatch is now available on pypi.

After two years without issues, I took advantage of needing to update the documentation with the new references to Github hosting to bump the version to non-beta.

Moving from Bitbucket

Exactly five years ago, almost to the day, in May 2015, I had to move my source repositories from Google Code to Bitbucket.

At that moment, Google Code recommended Github as the hosting replacement, but I preferred to keep using Mercurial. Five years later, all my professional experience relates to Git (with some usage of SVN and even still CVS), and I can definitely understand the shift of focus in Bitbucket, deprecating any Mercurial usage.

However, Bitbucket's total lack of support for migrating existing projects to Git is a regrettable attitude after these years of great support. There are good tools to support the migration from Mercurial to Git -I used fast-export myself-, but Bitbucket does not support converting a repository from Mercurial to Git, even if uploading the converted files were a manual process. And deleting the repository and creating a new one would mean losing the project's issues.

I decided to move to Github, and the move was helped by a tool to migrate the issues directly from BitBucket to Github. I tried this other tool at first, but it gave me too many problems. In any case, both are proof of the issues that many people are having with the demise of Mercurial support at Bitbucket...

Farewell, BitBucket.

TableFilter v5.5.2

New release for this Java library, with no functionality changes at all; it was just required to update the documentation and links after the move from Bitbucket.

The source code and the issues management are now hosted on Github, and of course, the code has moved from Mercurial to Git. It is a funny coincidence: I started using SVN for version control, and moved to Mercurial on the 5th of May 2010, using Google Code to host the repository. Exactly five years later, on the 5th of May 2015, I had to move from Google Code to Bitbucket, and almost exactly another five years later, on the 7th of May 2020, I completed the migration from Bitbucket to Github. Which means ten years and two days of good Mercurial support. Shame that Bitbucket is kicking it out...

Internal GPU

The Intel HD Graphics 4000 included in my aging i7-3770 CPU cannot handle the 3840x1600 resolution of my new monitor, so I had to shop for a new GPU card. I believe I should have stayed with my initial choice, a passive GT 1030, but in the end I chose the most powerful card my 400-watt PSU could feed: a GTX 1650 Super.

I had expected an easy setup -disabling the internal GPU and initializing only the PCI Express GPU-, but my motherboard was crashing continuously. In the end, the only BIOS settings that allowed me to boot were keeping the internal GPU enabled and setting the GPU initialization to Auto or IGFX (internal).

Using Ubuntu 20.04, no configuration is needed to use the internal GPU and the Nvidia card simultaneously; everything worked flawlessly immediately when attaching a monitor to each GPU.

Spoiled geek

Spoiled geek: a geek who orders a 38" monitor and, on first impression, thinks: it is not THAT big.
Funny part: I wrote in 2011 about the same impression I had with a 30" monitor, the Dell U3011.

But, in fact, this time I think it is the right impression. It is very wide, about 20 cm wider than the 30" monitor, but it is also shorter: 5 cm less, though with small bezels, so it just translates to 3 cm less display height. Which I don't miss: the horizontal neck move is easier (at least for me) than the vertical one.

My original intention was to get a 34" curved wide monitor. Glad I decided on this size, and in fact I am left wondering about the much bigger, but equally priced, Dell U4919DW. That one is much more curved, but it is also (3 cm) shorter, with 30 cm more width than my 38" selection. This made me think that the ergonomics were quite a bit skewed. Plus, I was considering a move from 30" to 34" and then to 38", so a 49" monitor was really one step too far. But perhaps the extra curvature helps with the ergonomics; perhaps up-sizing to 49" would have been the good, bold move...

DD-WRT issues

In the last 24 months, I have added three entries to this blog. As all procrastination woes go, there is an excuse: I intended to move my static site generation from my own custom solution to Hugo, but I didn't dedicate enough time to it and the migration never happened; in the meantime, I would just not write more entries...

My custom blog solution is a C# program I wrote almost 15 years ago, so I fully intend to move to Hugo or something else at some future time. It works fine, and pretty fast (a few seconds for this site), but any single modification would take me many hours. I even had a python solution to synchronize my local folders to my server's location, which at some point in the last 15 years was replaced by a simple rsync script, which, of course, expects a direct connection to port 22 on my dynamic public IP address. For security reasons, that port is closed on my home router, and I only open it when required.

As complexity increases with time, my previous port opening moved to... multiple port openings, as I had decided to secure a part of my home network. But when trying to access the final router, where I had flashed DD-WRT, I couldn't remember the password, and it was not available in my KeePass files, so after many, many logon attempts, I had to reset the firmware. And, for the sake of it, upgrade the router.

But upgrading, or installing, the DD-WRT firmware is not simple. If you visit its wiki, you receive a big warning not to use the router database. It offers two alternatives: Kong builds, with additional testing, which have not been available since July 2019, and beta builds, which, by definition, seem a coin toss.

After upgrading, I still had network access, but the Wifi access was gone. SSIDs were broadcast, radios were on, but my phone couldn't see them. Funny point: the DD-WRT Status page offers a gadget to scan the network, but it was asking me to enable the Wireless network -of course, giving no hints on how to do so...

The solution was the usual one: reboot the router, and the 2.4 GHz SSID appeared on my phone. To get the 5 GHz SSID as well, I had to move the wireless channel from Auto to a specific channel. My router is a Netgear WNDR 3700; its last official Netgear firmware is dated 2011. I am definitely glad the only issues I had with the DD-WRT firmware were so easy to solve.

TableFilter v5.5.1

New release for this Java library, implementing some requested functionality regarding the automatic hiding of filter popups during table model updates.

It was in the pipeline for two months, but other things kept popping up and delaying this release.

New optmatch version

Version 0.9.2 of optmatch is now available on pypi.

It solves the issues raised so far, and cleans up some implementation details.

Argument parsing in python

Almost 10 years ago, I thought about implementing a new solution for argument parsing in python.

Basically, existing solutions (now and then) instruct a parser about the expected options and flags, and the parser handles the arguments, catching any errors and producing a beautiful help summary automatically. The major issue is the flexibility of the parser to define the correct argument syntax: for example, whether it is possible to specify incompatible options, flags, etc.: two options --verbose and --quiet can be incompatible, or an action scan could require a mandatory --source option. Additionally, if the parser is flexible enough to provide this functionality, it can be quite difficult to program, or to change at a later stage.
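
For example, with the standard argparse, the incompatible flags and the mandatory option of the (hypothetical) scan action of the previous paragraph would be declared like this:

```python
import argparse

parser = argparse.ArgumentParser(prog="example")

# --verbose and --quiet cannot be given together
group = parser.add_mutually_exclusive_group()
group.add_argument("--verbose", action="store_true")
group.add_argument("--quiet", action="store_true")

# the 'scan' action requires a mandatory --source option
subparsers = parser.add_subparsers(dest="action")
scan = subparsers.add_parser("scan")
scan.add_argument("--source", required=True)

args = parser.parse_args(["--verbose", "scan", "--source", "/tmp"])
```

This works, but anything beyond these two cases -a flag valid for only some actions, options that become mandatory depending on another option- means extra validation code after the parsing, which is exactly the effort described below.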

My idea was to define class methods that could handle each possible combination of arguments, and the parser would extract that information without further effort from the programmer. That is: express what each operation requires, and expect the parser to handle all the required logic:

class Example(OptionMatcher):

	def handle_common_flag(self, mail_option):
		...

	def handle_compression(self, file, compress_flag=False):
		...

	@optmatcher(flags='verbose', options='mode')
	def handle(self, file, verbose=False, mode='simple', where=None):
		...

Not only does it simplify argument handling and express the purpose clearly, but adding or removing options, flags or operations is in fact very easy.

I implemented this solution shortly after, and then started adding functionality as required. Looking at the history of the project, I invested around 10 months in it, although I do not remember anymore the associated effort -the file has just 800 lines of code, plus comments, plus tests and documentation-.

And then I used it in some of my projects, a few people contacted me about it, and it was included in some other projects, but it was definitely not a success. In fact, when I needed to do argument parsing in scripts at work, I would normally default to a standard solution: argparse.

A few weeks ago I was once again facing the creation of a minor personal application, and I started using argparse. Soon I got into the familiar territory of having incompatible options, flags for only specific actions, and the need to express different actions under the same script. This meant adding code after the parsing to handle all these issues. And, after a while, adding a new option required quite a lot of effort just to handle everything properly, so I remembered my own unloved library and decided to give it a try, again.

And that was it, like falling for some old love :-)

So I have spent a few hours updating the library -it was only supporting camel case notation on parameters, and now it supports the more standard underscores-, and, more importantly, uploading it to the standard Python Package Index (PyPI), so it can now be easily installed as:

pip install optmatch

Even better, support for python 2 and 3 is now included in a single file.

Bollocks UI

I read this interesting article on the (negative) effect of using flat UI, where users were found to require longer times to grasp the meaning of the UI elements in web pages, therefore delaying their actions.

Bollocks, but not as much as trying the latest Nautilus interface, the default file navigator in Ubuntu 17.04. Creating a new folder requires right clicking on the folder contents and selecting the first option (New Folder). But note that selecting the parent folder and right clicking shows a menu where the option to create this new folder does not appear. Furthermore, Nautilus has an application menu, but the option to create a folder is all but missing.

When you right click on the folder contents, the first option is indeed 'New Folder', and it very handily displays its shortcut: Shift+Ctrl+N. BUT: what happens if the folder contains too many files / sub-folders? There is no empty content left to right click on, and there is no way to create a new folder except by knowing the shortcut.

I guess that one or more developers just went too far in their quest to simplify the interface.

OS Agnostic

Yep, after 6 months in Ubuntu land, the Wayland switch -or the associated bugs- dropped me back into the arms of Hackintoshing.

Not happy about it, but Wayland is at the moment a no-go in my configuration; and the MacOs installation was really simple, with just a couple of hiccups. The whole installation process is described here.

Switching operating systems is now an almost painless process. I do not rely on the cloud to store my files -just a few ones, which in fact host most of my configuration information-. As a result, installing a new operating system normally implies:

  • The OS installation itself: 30 minutes for Ubuntu, a few hours for Hackintosh
  • OS configuration: a 15 minutes process, once I have it correctly documented
  • Programs installation: a 60 minutes process, using software centers and the command line
  • Copying my common folders, either from the old drive or from a backup, which takes up to 3 hours

Ubuntu 17.10

Six months ago, after repeated issues with Hackintosh, I moved to Ubuntu 17.04, and all was well.

Then came Ubuntu 17.10, with the move from Unity and the introduction of Wayland, and man, was that a big change. It was supposed to be, like every recent Ubuntu release, a minor upgrade, almost more of an update than a real move.


But. That workspaces do not work anymore as they used to with Unity, that shortcut keys seem to work only randomly, that Steam seems much slower and exhibits a gap between the cursor pointer and the exact pointed location: these are, for me, just inconveniences. But that whenever the monitor goes to sleep, Wayland performs a swift Harakiri, with all X programs getting killed, is a major showstopper. I cannot justify all the lost time.

What is special in my setup to produce such an outlier crash? No idea. I use an embedded Intel HD 4000 solution with a DisplayPort connection; I have tried disabling DDC/CI on the monitor, without success. For me, Ubuntu 17.10 is, on this machine, a no-go.

I have filed a bug for this problem, but no solution or activity so far...

I could stay with 17.04, but that means a very short period of security upgrades. I could try installing 16.04, an LTS release, or, as I have done, leave the upgraded Ubuntu 17.10 in place, hoping for updates to solve my issue, and have a new Hackintosh try on a separate drive. Let's see how the fight Ubuntu Wayland vs Hackintosh works out for me in the near future...

Note: on the 28th of November, I switched back to Ubuntu, upgraded, dist-upgraded and rebooted. But Wayland persists in its suicidal ways...

DisplayPort on Ubuntu

There is an ongoing issue with monitors connected via DisplayPort in Ubuntu: the monitor shows a black screen and "no input signal" after turning it off and on manually.

Turning off a DisplayPort-connected monitor is treated the same as disconnecting it, and somehow X11 does not recover from this. I have seen this error reported for Nvidia and Radeon cards, but in my case I have an Intel HD 4000, and the error is exactly the same.

And it happens with all kinds of monitors, including my Dell U3011. A proposed solution is to disable DDC/CI on the monitor itself, but this didn't solve anything for me.

A solution I have found is to press Ctrl+Alt+F6 (or +F5, etc.) to open a TTY console, and then press Ctrl+Alt+F7 to get back to X11. But it works only sporadically, sometimes requiring pressing these keys several times, or creating new TTY consoles, like pressing Ctrl+Alt+F2 to create a TTY2 if TTY6 had already been created before.

Another proposed solution is to ssh from another machine and run:

env DISPLAY=:0 xset dpms force off
env DISPLAY=:0 xset dpms force on

In this case, the best option is to create a shell script in /usr/local/bin/ with this content, then open Settings, Keyboard, Shortcuts, and create a custom shortcut (in my case, Ctrl + Alt + W). This solution always works, but there is a catch: the shortcut is only available when the user is logged in, so if the system is asking for the user password to unlock the screen, the shortcut will not work. In that case, it is needed to enter the password blindly, press Enter, and then press the shortcut.

Moving Ubuntu to separate disk

After a few weeks with my new Ubuntu installation, I was able to do all my usual tasks without missing OsX. So far, I have found only two issues: the DisplayPort monitor sometimes not waking up, and some crashes in Steam.

But the main issue appeared when trying to set it up for Android development... and running out of space. So the original hackintosh disk, which was on standby, had to go, and the idea was to clone my existing installation to that disk.

So I reformatted the hackintosh drive, just to find that I had removed the EFI partition that booted the Ubuntu system... My solution was to launch the Ubuntu installer and install Ubuntu again on the new drive (the previous Hackintosh drive), taking care of having one EFI partition plus a big ext4 one. Once I had Ubuntu installed, I launched the installation system again, mounted the old partition under /media/old and the new one under /media/new, and then copied all the important files:

cd /media/old
sudo cp -R --preserve=all bin/ etc/ home/ lib/ lib64/ \
            opt/ root/ run/ sbin/ usr/ var/ /media/new/

Then edit the file /etc/fstab on the new drive to change the UUIDs of the disks, and presto!

Setting up Ubuntu

Setting up a Ubuntu computer seems to be my fate of late. And each time I do it, I need to do the same Google searches: how to configure workspaces, how to define my shortcuts, how to make Guake start at startup, etc.

So I have collected all the steps I take to configure a Ubuntu machine, from a generic point of view: not describing all the applications I finally use, but definitely including all the details to configure Ubuntu as I like it. It seems weird to show these steps for a Ubuntu 16.10 installation on the same day that Ubuntu 17.04 is published, but I will definitely comment on any changes.

Ciao, Hackintosh

I have been setting up my computers as Hackintoshes since 2009; currently I have a Mac laptop with the latest Sierra installation, a Dell laptop running Ubuntu 16.10, plus two desktop computers, one running Windows Vista and Snow Leopard (yep, both still run perfectly fine), and the other MacOs Sierra. At work I use exclusively Linux, and I had been wondering for a long time about my reasons to keep my Hackintoshes at home.

As of last weekend, this question has been answered: farewell, Hackintosh. I will keep the old desktop running Snow Leopard, and my Apple laptop running MacOs, but I definitely see no point in not using Ubuntu as my first OS choice. The choice is rather simple to make, as none of the programs I use lack a Linux version, with the exception of Evernote, which I can still handle via its Web interface.

Batch editing Google contacts

The new Google contacts application is very nice. It looks great, and it offers good functionality, like merging of duplicates. Editing contacts works perfectly, but it can only be done one contact at a time. I was migrating contacts from a non-smartphone, and the migration had converted them into an ugly 'Family name; surname' format, such as 'Trumpy; Donald'. And as slick as the user interface is, editing over 300 contacts by hand was a boring perspective.

Automation to the rescue: export the contacts, process them with a python script, import them again. So, in Google Contacts, press 'More', then Export, then read the popup warning and head to the old Google Contacts application, as the new one seems unable to export the contacts. Press the More button again, again Export, and choose the Google CSV format.

The script to convert the contacts is as simple as:

import csv

# read the exported contacts (Google exports them as UTF-16)
with open('google.csv', newline='', encoding='utf16') as f:
    contacts = list(csv.reader(f))

for contact in contacts:
    colon = contact[1].find(';')
    if colon > 0:
        name = "%s %s" % (contact[1][colon+1:], contact[1][0:colon])
        contact[0] = contact[1] = name

# write the converted contacts, ready to be imported again
with open('google-out.csv', 'w', newline='') as f:
    csv.writer(f).writerows(contacts)
This reads the exported google.csv file and creates a new google-out.csv file; in the old Google Contacts application, now remove all contacts, and then initiate the Import process, passing the created file. Easy as pie.

The previous script shows a very basic transformation; the important aspects are: (1) Python 3 is needed to run this script, as its CSV reader handles the unicode format properly. (2) The input format seems to be UTF-16 (it definitely was on my OsX machine). (3) However, it was reimported as UTF-8 without issues.

Finally! oh, no!

I have a Macbook Pro 13" early 2011. I upgraded it manually: the memory to 8 GB and the hard disk to a 128 GB SSD (and later to a 256 GB one). The battery is down to about 3 hours, and I was planning to buy the revamped Macbook Pro as soon as it was released. So I was one more of those Macbook Pro fans completely astonished to see what Apple was releasing.

Personally, the touch bar seems a total compromise: missing the touch screen (which I am not fond of either), lacking real useful buttons (wrong: the useful buttons seem to be always on in the touch bar), and looking like the Apple way to say 'Look, we are innovating!'

My macbook is RELIABLE; it has come with me to really remote areas in Bangladesh, Philippines, Benin, Eritrea or Mozambique, and it has performed like a pro. The available ethernet connection has saved me more times than I can count. The magsafe connector has very probably avoided a few accidents, and when/if it ever gets retired, my macbook pro should get its own urn.

What is Apple offering now? Only USB-C (oh, well), no magsafe (sigh!), no ethernet connector (augh!), a touch bar (oh, my, my), soldered memory and storage ($*&#^$!)? Together with an old CPU (okayish, but not very okayish) and the limitation of 16 GB (yeah, I know, more memory would impact battery time). And then the butterfly keyboard, which is unpleasant, and seems not that reliable. Note that I do not even mention the price; I would not mind paying the Apple tax for a worthy product!

My company provided me with a Dell XPS 13, developer edition, and I was thinking about hackintoshing it. I have 3 hackintoshes -all desktops-, and this looked like a good way to get a proper Apple laptop without the limitations of the latest Apple models.

But is this the case? The XPS 13 has more connectors, but still lacks the one I consider important (Ethernet), which is just not possible with laptops this thin (but I do not care about their thickness!!). As in the Macbook, the memory is soldered, so you get what you buy, and that is also limited to 16 GB. In Luxembourg, the only XPS 13 Linux model with 16 GB of memory comes with a 512 GB SSD (256 would have been enough for me, especially when it can be upgraded) and with the touchscreen at great resolution (the basic 1920x resolution is enough for me, especially when it improves battery life, and a touchscreen is something I still need to find useful). It costs, with the core i7-7500U processor, 1800 euros, with an ongoing promotion down to 1600 euros. It has approximately the same size as the Macbook Pro and, for me, the same limitations.

Screen-wise, the Dell has higher resolution and the Apple better brightness -which at this size means for me a bonus point for the Apple-. The Dell's keyboard is better, and the trackpad works well, but it cannot be compared with the Macbook Pro's. I like / love linux, but the desktop experience is subpar to MacOS, and the full integration with the hardware means that even though the Macbook battery is smaller, its battery performance will normally be better than the Dell's.

The Macbook Pro 13 inches without the silly touch bar costs 1875 euros. That provides a 256 GB SSD, enough for my taste, but goes up to 2100 euros for the 512 GB SSD. The core i5 processor (i5-6360U) can be upgraded to an i7 (i7-6660U) for 'just' 350 euros, raising the total price to 2445 euros. So, the Dell XPS 13 with a newer i7 processor costs 645 euros less, and with the current promotion 800 euros less, a cool 33% cheaper. In fact, I find it a better approach to opt for the touch bar version and stay with the core i5 -the i7 is anyway only dual-core-. For 'just' 2165 euros you get 16 GB of memory, a 256 GB SSD and a core i5 at 2.9 GHz, plenty speedier than the non-touch-bar one (hint: Apple wants you to get the touch bar).

Personally again, I think that the Apple is all but Pro. I do not think that the Dell is more Pro at all, just cheaper for the same ambitions. If I were to go the Apple way now, I think I would try a real 4-core machine, in its 15" envelope. The basic 15 inches gives a 4 core processor, 16 Gb memory and a 256 Gb SSD for 2600 euros, including a discrete graphics card in the mix.

Or, well, I will just give up on these manufacturers and enjoy my Dell for the time being; perhaps I will hackintosh it for the pleasure of it, and I guess I will still invest some money in Apple stock... and in a new battery for my reliable Macbook 2011 (real) Pro.

Ubuntu 16.10 on Dell XPS 13 (9350)

dell xps 13I got a Dell XPS 13 recently, the developer edition that comes with Ubuntu. It is not the latest model (9360), with the Kaby Lake processor, but the previous one (9350) with Skylake. These two versions differ not only in the processor, but also in the Wifi card and the Ubuntu version; while the 9360 model comes with Ubuntu 16.04 installed, Dell only provides Ubuntu 14.04 for the 9350 model.

Ubuntu 16.04 includes Linux kernel 4.4, with incomplete support for Skylake processors; kernel 4.6 included specific support for Dell XPS 13 systems, and Ubuntu 16.10 comes with kernel 4.8. Installing Ubuntu 16.10 on this laptop is a no-brainer: everything works fine directly.

My laptop is the model with touchscreen and high resolution (3200x1800); the fonts appear too small, so it is better to go to Settings / Displays and choose a 2x scale (or any scale at will).
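The same scaling can also be applied from the command line with gsettings (a sketch, assuming the stock Unity desktop of Ubuntu 16.10; the monitor name 'eDP1' is an example, check yours with xrandr):

```shell
# Double the interface scale on HiDPI screens (integer scaling)
gsettings set org.gnome.desktop.interface scaling-factor 2

# Unity also honors a per-monitor scale; the value is the scale times 8,
# so 16 means 2x for the built-in panel
gsettings set com.ubuntu.user-interface scale-factor "{'eDP1': 16}"
```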

The other problem that appears with the touchscreen is that Ubuntu installs by default a library implementing braille support. Entering

sudo tcpdump -i lo

shows a lot of traffic on the loopback interface from port 4101. To disable it:

sudo apt-get purge brltty xbrlapi

(followed by an additional reboot)

Although everything works fine, there are some quirks. For example, the touchscreen works until the machine goes to sleep with the lid closed. To have it working again, it is necessary to quickly close and reopen the lid.

TableFilter v5.3.0

table filter iconNew release for this Java library, implementing some requested functionality: the possibility to filter entries that contain some text -initially, the default operator would only display entries starting with a given expression-.

VirtualBox guest additions on headless

virtualbox Normally, I run virtualbox machines in headless mode, so it is useless to install an OS with a full GUI. I favor in these cases a debian installation (minimal, using the netinst CD), ensuring that PAE/NX is enabled in System/Processor.

It is still helpful to install the virtualbox guest additions to improve performance, but the usual way -Devices/Install Guest Additions CD Image...- doesn't work. In this case, the best procedure is to download the ISO and perform a manual install. For the current version, 5.0.16:

cd /tmp
mkdir iso
sudo -s
apt-get install -y dkms
wget http://download.virtualbox.org/virtualbox/5.0.16/VBoxGuestAdditions_5.0.16.iso
mount -o loop VBoxGuestAdditions_5.0.16.iso iso
sh iso/VBoxLinuxAdditions.run --nox11
umount iso
rm -Rf iso VBoxGuestAdditions_5.0.16.iso

This will install the extensions, and produce a final warning:

Could not find the X.Org or XFree86 Window System, skipping.

This warning is okay. It is possible to check if the additions are installed by invoking:

lsmod | grep vboxguest
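The exact version of the installed additions can also be checked, to confirm it matches the ISO that was used (a quick sketch; modinfo reads the metadata of the built kernel module):

```shell
# Reports the guest additions version compiled into the vboxguest module
modinfo vboxguest | grep -i '^version'
```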


rclone A few years ago, I set up a poor man's backup system for a site using rsync and Dropbox. Eventually, the database and associated files required more than 2 Gb, and, still following the poor man's habits, I had a look at Google Drive; unfortunately, Google does not have an official headless linux client.

But there are several unofficial clients, like gsync, which tries to provide rsync functionality but is still severely limited. I opted instead for another client, rclone, which supports not only GDrive, but also Dropbox, Amazon S3, Backblaze, etc. The documentation is very complete, the setup really simple, and its functionality covers all my scenarios.

The installation instructions only cover Linux directly, and for some reason they put the executable into sbin, which is all but useful -the idea is to have it executed, with different credentials, by each user. Finally, I used the following instructions for Linux:

cd /tmp
unzip rclone-v1.28-linux-amd64.zip

sudo cp rclone-v1.28-linux-amd64/rclone /usr/local/bin
sudo chown root:root /usr/local/bin/rclone
sudo chmod 755 /usr/local/bin/rclone

#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone-v1.28-linux-amd64/rclone.1 /usr/local/share/man/man1/
sudo mandb
rm -Rf rclone-v1.28-linux-amd64*

And almost the same instructions for OS X:

cd /tmp
unzip rclone-v1.28-osx-amd64.zip

sudo cp rclone-v1.28-osx-amd64/rclone /usr/local/bin
sudo chown root:wheel /usr/local/bin/rclone
sudo chmod 755 /usr/local/bin/rclone

#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone-v1.28-osx-amd64/rclone.1 /usr/local/share/man/man1/
rm -Rf rclone-v1.28-osx-amd64*
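Once installed, the typical workflow is a one-time interactive configuration followed by periodic syncs (a sketch: the remote name gdrive and the paths are made up for illustration, not taken from my actual setup):

```shell
# One-time interactive setup: creates a named remote (here: gdrive)
rclone config

# Mirror a local directory to the remote;
# sync deletes remote files that no longer exist locally
rclone sync /var/backups/site gdrive:site-backup

# List the remote contents to verify the upload
rclone ls gdrive:site-backup
```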

Two factor authentication with SSH

google authenticator icon This must be the best way to strengthen the security of your ssh connections for those cases where ssh keys are not available.

Tip copied from this arm-blog
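In short, the setup boils down to a few steps (a sketch for Debian/Ubuntu; the package and file names are the standard ones for PAM-based two factor authentication, not taken from the linked post):

```shell
# Install the PAM module, then generate a per-user secret
# (scan the printed QR code with the Google Authenticator app)
sudo apt-get install libpam-google-authenticator
google-authenticator

# Enable it in PAM: add this line to /etc/pam.d/sshd
#   auth required pam_google_authenticator.so

# And in /etc/ssh/sshd_config, set:
#   ChallengeResponseAuthentication yes

sudo service ssh restart
```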

Redmine on Debian

redmine icon Added instructions to install Redmine on Debian, using PostgreSQL and Nginx.

The main problem was in fact setting up email support correctly (and then, trying to escape the SPAM folders in Google).