

Switching from Dropbox to Syncthing

I have been using Dropbox for a long time, possibly for 10 years or so. Over time I've been putting more and more of my home directory into it, since it's so convenient. At some point it grew to 100 gigabytes and was connected to 7 or 8 devices (not counting the API-only Android file managers). But over that time, I also saw the app and the service grow into a bloated mess instead of the simple sync tool that it once was.

Zawinski's law says that every program attempts to expand until it can read mail. Similarly, the fate of every cloud service is to morph into an enterprisey "collaboration tool" with features nobody asked for. Signing in to the Dropbox website immediately greets you with Paper, Apps, Spaces, Showcases, and a tiny area in the middle where your actual files are. (The only thing that isn't there is Photos, which was useful so they removed it.)

This all would be fine if it had been confined to just the web UI, but at least on Windows it wasn't. This blog post illustrates it well – the desktop client was a mess of promos, popups, and notifications; remember when it grew its own file manager app to replace Explorer? The last straw was when they started annoying me with deliberately non-dismissable "Dropbox Family" promos, as if I wasn't already giving them enough money for Plus.

The Linux version fared better, since most likely none of the higher-ups cared about Linux, but on the downside, none of the higher-ups cared about Linux. The daemon worked without X11 (which is better than the Nextcloud client), but only if you knew where to download the separately-distributed CLI tool from. It would crash if you started it via nice. It would deliberately nuke its configuration if the inode number was different (like if you restored it from backups), and it would nuke itself for no reason if you were using XFS, and instead of fixing the issue they just made it refuse to run on anything but Ext4. (Though I literally just found out now that they began accepting ZFS/XFS/Btrfs again in v77, which is okay I guess.)

Don't get me wrong – aside from all those problems it was a useful tool that worked transparently most of the time, and they did work on things like the new sync engine (which greatly improved certain things on all operating systems), but more and more it felt like it only barely met expectations. (On my server, I had to put ~/Dropbox on an ext4 loop image!) By then I had already started using Unison and Syncthing for a few other things – e.g. several GB of document scans that only needed to be synced between two computers – not to mention git-annex, and I began seriously wondering whether I could reduce the number of different sync tools in use.

At first I moved ~/Music from Unison to Syncthing – yes, I do keep all my favourite songs locally. Seeing it work well (most of the time), I then added a few "config" folders that had previously been symlinked into Dropbox. Now, a year later, I grew tired of running both side by side and went all in: first adding ~/Dropbox into Syncthing on my server, then gradually doing the same on every computer (pointing it at the existing files to avoid re-cloning the entire 100 GB).

This did require some untangling of mysteriously desynchronized states, but overall it went well, and two weeks ago I cancelled my Dropbox subscription and completely removed it from all my machines. I've hit some minor new problems since then, but I still have no intention of going back.

The good

The bad (and the meh)

There are, however, some issues with it – primarily with the core metadata sync engine that Syncthing uses.

Syncthing: 13 items in this directory are out of sync
Me: Which items?
Syncthing: None (page 1 of 2)
Me: Maybe you would like to go and sync them?
Syncthing: No

Sometimes, rapid changes can semi-permanently desync at least the displayed state. For example, my server still thinks my laptop is "out of sync" due to a foo.jpg.crdownload file (which Chrome renamed to foo.jpg long ago). On a few occasions I have seen a local folder end up in an "error" state because it couldn't pull files that had already been deleted from the source: the local Syncthing received the batch of file additions, queued the downloads, and didn't process a later batch of deletions until it was done downloading. Whenever that happens, the only way to unwedge it is to create dummy files at those paths again.
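That workaround is mechanical enough to script. A minimal sketch, assuming you've copied the stuck paths out of the web UI's "Out of Sync Items" list (the paths below are made-up examples):

```python
import os

# Paths Syncthing still insists should exist, relative to the folder root.
# These are hypothetical examples; substitute the ones from the web UI.
stuck = ["Downloads/foo.jpg.crdownload", "scans/old-receipt.pdf"]

def create_dummies(folder_root, paths):
    """Recreate empty placeholder files at the stuck paths so Syncthing
    can "sync" them; once it settles, they can be deleted normally."""
    for rel in paths:
        full = os.path.join(folder_root, rel)
        os.makedirs(os.path.dirname(full), exist_ok=True)
        with open(full, "a"):
            pass  # like touch: create the file if it doesn't exist
```

After Syncthing picks the dummies up and the folder goes back to "Up to Date", deleting them propagates as an ordinary deletion and the error state clears.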


Protocols of a different timeline. Gemini

There's still quite a bit of community around Gopher sites. It was a way to publish information before the Web took over, and it seems quite a few people in the "tilde-verse" still develop Gopher clients, maintain personal sites and publish "phlogs" via gopher:// today.

Gopher might be a way to escape the fancy ReactJS-based, resource-hogging web of today, but unfortunately it swings far into the opposite extreme – it's rather clumsy and only marginally better than hosting .txt files over anonymous FTP (because indeed, text files arranged in directories are about all you get).

But it turns out the world isn't standing still: people are building a new thing called Gemini, which aims to sit "somewhere in the middle" between HTML and raw text. It's a whole new protocol serving pages in a whole new markup format, but it's kept deliberately very simple. (You get some markup, but it's even more minimal than Markdown. You get hyperlinks, but they're not inline with the text.) The Gemini project's main site is of course served via Gemini, but just as with Gopher and Finger, there are web gateways.
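For a sense of just how minimal it is, a page in Gemini's text/gemini format looks roughly like this (the URL is made up) – note that links are whole lines of their own, never inline:

```
# A heading

Plain lines are just paragraphs of text; the client wraps them.

* Bullet lists exist
> ...and so do quotes

=> gemini://example.org/2020-06.gmi A link line: "=>", a URL, an optional label
```

That's essentially the entire format, plus a preformatted-text toggle – there's nothing like CSS or scripting, which is why presentation ends up being the client's job.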

I actually kind of like the idea, and the various clients (at least in the screenshots) seem a bit more welcoming; they look like something you'd actually want to read articles in, rather than feeling like a dusty library catalog. Looking at the Motif-themed Castor screenshots actually reminds me a bit of VNs like lost memories dot net that are set in the late-Geocities era of the web – though in the case of Gemini, the style is all client-side.

* * *

Gemini always uses TLS. It's not a protocol meant for retrocomputing, and compatibility with obsolete operating systems is explicitly stated as a non-goal. I think that's a good choice for any new project.

I still deliberately don't impose HTTPS on the main website. Although it does fully support HTTPS (with TLSv1.2 enforced), it remains accessible without it, because compatibility with obsolete operating systems is a goal of this website. (Or in other words, because I'm vain and I try to load it on whichever old OS I've got installed.) For those of you reading this through Internet Explorer 5, a CDF channel is available!

There are various websites which offer to check your own domain for known issues, and they sometimes give you a single-letter grade. I found one recently, entered my own domain and got immediately graded as "Fatal error". Turns out, although most things were alright, I committed the mortal sin of not having an HTTPS redirect on port 80.

(The check-tool also scolded me for wasting my visitors' bandwidth by using inline <style> here and there, and by not doing GZip. Sorry.)

Protocols of a bygone era. Hesiod

You probably know that Linux can look up user accounts from the traditional /etc/passwd file, from Sun's NIS/YP system, or from an LDAP directory server (if you install additional software). But did you know that glibc also comes with built-in support for retrieving user account information over DNS?
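The mechanism is simple enough to sketch: a lookup for user jdoe becomes a DNS TXT query for a name like jdoe.passwd.ns.example.com (the "ns" and domain parts come from /etc/hesiod.conf), and the record's payload is an ordinary passwd(5)-style line. A rough illustration of the name construction and record parsing, with a made-up domain and record:

```python
# Hesiod maps NSS lookups onto DNS TXT records. The lhs/rhs values and the
# example record below are made up; real ones come from /etc/hesiod.conf
# and your DNS zone.

LHS = ".ns"           # hesiod.conf: lhs=.ns
RHS = ".example.com"  # hesiod.conf: rhs=.example.com

def hesiod_name(key, hes_type):
    """Build the DNS name Hesiod queries, e.g. jdoe.passwd.ns.example.com."""
    return f"{key}.{hes_type}{LHS}{RHS}"

def parse_passwd_txt(txt):
    """Split a passwd(5)-style TXT payload into its seven fields."""
    name, _pw, uid, gid, gecos, home, shell = txt.split(":")
    return {"name": name, "uid": int(uid), "gid": int(gid),
            "gecos": gecos, "home": home, "shell": shell}
```

On a real system you'd enable it with a line like `passwd: files hesiod` in /etc/nsswitch.conf and point hesiod.conf at your domain; glibc's nss_hesiod module then does the DNS legwork for every getpwnam() call.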

Here's an article about Hesiod, a mostly forgotten service that's still present on every Linux glibc system to this day.