
grawity's journal

from behind the event horizon


Encr(y|a)pted pastebins

Some pastebins, such as ZeroBin or 0bin, use client-side encryption for everything stored in them (usually as a way to avoid liability). As the encryption is always implemented in JavaScript, it becomes quite annoying when one wants to download a "raw" version of pastebinned text (e.g. a piece of code or a digitally-signed message), or simply to view the paste's contents in situations where graphical web browsers are unavailable – since neither curl nor wget nor any terminal-based browser (even elinks) will be able to decrypt the text.

To fix this, yesterday I added support for both 0bin and ZeroBin to my getpaste tool. Both pastebins use the same method of encrypting the text with SJCL, serving the encrypted JSON blob, and putting the randomly-generated password in the URI fragment.
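The detail that makes a command-line downloader possible at all is that the password never reaches the server – it travels in the URI fragment, which browsers keep client-side. A minimal sketch of pulling it out (the URL below is a made-up placeholder):

```python
from urllib.parse import urlsplit

# The part after '#' is never sent to the server; that's where these
# pastebins stash the randomly-generated decryption password.
url = "https://paste.example/?abc123#s3cr3tkey"  # hypothetical URL
password = urlsplit(url).fragment
print(password)  # → s3cr3tkey
```

This is also why getpaste needs the complete URL: without the fragment, there is simply nothing to decrypt with.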

I had to do some digging in the source code, as SJCL's documentation of default algorithms isn't terribly clear (sjcl.encrypt() defaults to AES128-CCM with PBKDF2-SHA256 at 1000 rounds, and the 8-byte CCM authentication tag is added to the end of the ciphertext), ZeroBin compresses the text with raw DEFLATE before encryption, and 0bin uses LZW…except when it doesn't. But now getpaste knows how to dump the raw text when given URLs from both websites, as long as they're complete with the "fragment" (…#foo) part.
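For reference, the key-derivation step with those defaults can be reproduced with Python's standard library alone. (The password and salt below are made-up placeholders; in reality both come from the URL fragment and the served JSON blob.)

```python
import hashlib

# SJCL's sjcl.encrypt() defaults: PBKDF2-HMAC-SHA256 at 1000 iterations,
# deriving a 128-bit key for AES-CCM. Values below are placeholders.
password = b"s3cr3tkey"
salt = b"\x01\x02\x03\x04\x05\x06\x07\x08"
key = hashlib.pbkdf2_hmac("sha256", password, salt, 1000, dklen=16)
assert len(key) == 16  # AES-128 key; the 8-byte CCM auth tag sits at the end of the ciphertext
```

The actual CCM decryption then needs a crypto library, but getting this derivation step right is what required the digging.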

The tool is available in my "code" repository at net/getpaste.

Some writings about IRC

Wrote two short article-like-thingies about some IRC features, to explain strange numeric nicknames after netsplits, and why DH-BLOWFISH isn't perfect as some people argue it to be.

These go along with the explanation of DCC SEND startkeylogger written earlier, in that all three were prompted by attacks on the freenode IRC network.

Linuxism

Just came upon this article on the linux-nfs list, posted by (apparently) the Linux NFS maintainer:

_portable_ applications do not use xattrs. They are a Linuxism that is not described by either POSIX or any other similar standard.

Uh

Extended attributes have been implemented by NetBSD, Mac OS X, IRIX, Solaris, OpenBSD, FreeBSD, BeOS/Haiku, Windows NT, even OS/2. How in the hell are they a "Linuxism"?

(Also – odd, considering that the same maintainer seems to have liked the xattr patches for NFSv3 six years ago. Maybe I'm missing something.)

Untitled – on my bash prompt

Warning: One of those really boring posts in which I brag about my epic hax. (But well, that's the point of this whole site, isn't it?)

Like many Linux users, I waste a lot of time customizing the hell out of my terminal's appearance. Part of this is creating an awesome shell prompt. Everyone likes to put all sorts of information there – whether you're in a Git or Hg repository, what branch you're on, what the exit status of the last command was (sometimes even expressed in the form of elaborate Unicode emoticons)...

Mine is plain in comparison – it just shows the hostname, path, and branch. So far it looked pretty much like this:

rain ~/pkg/abs/telepathy-mission-control/git/src master
$ foo

Over time I ended up implementing various unusual things in it, however – for example, highlighting the last directory component, or collapsing the path when it becomes too wide for the terminal:

rain ~/pkg/abs/telepathy-mission-control/git/src/telepathy-mission-control master
$ cd tests/twisted/tools

rain ~/…sion-control/git/src/telepathy-mission-control/tests/twisted/tools master
$

It generally shows enough of the path to remember where I am, although I'm probably going to adjust it a little bit yet. Today I also changed the highlight to always start at the repository root, which makes things much clearer when dealing with nested repositories – it looks a bit ugly however:

rain ~/pkg/abs master
$ cd telepathy-mission-control/git

rain ~/pkg/abs/telepathy-mission-control/git master
$ cd src/telepathy-mission-control

rain ~/pkg/abs/telepathy-mission-control/git/src/telepathy-mission-control master
$

The collapsing is implemented in roughly 50 lines of bash.
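The actual implementation is bash, but the core of the collapsing boils down to something small enough to sketch here. This is an illustration of the idea only, not the real code: keep the "~/" anchor, drop the middle, and show as much of the tail as fits, marking the cut with an ellipsis.

```python
# Sketch of the collapsing idea (not the actual ~50 lines of bash):
def collapse(path: str, width: int) -> str:
    if len(path) <= width:
        return path
    head = "~/" if path.startswith("~/") else ""
    tail = path[-(width - len(head) - 1):]  # 1 column reserved for the "…"
    return head + "…" + tail

print(collapse("~/pkg/abs/telepathy-mission-control/git/src", 20))
# → ~/…n-control/git/src
```

The real prompt additionally highlights the last component and starts the highlight at the repository root, which this sketch doesn't attempt.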

irssi ≥ ≨ ≱ ≲ ⋙ … ⋚ *

I've used Irssi as my main IRC client for almost 5 years, before switching to Weechat. Despite being pretty much unmaintained (and lacking some features), Irssi is still a good client, but… it has a problem: the users.

Specifically, the users who always feel the need to declare that Irssi is better, that "irssi > *", that Irssi is perfection.

Most users of other IRC clients openly admit that there's some misfeature or something else that they don't like. For example, the way Weechat works, it must wrap overly long URLs into multiple lines, making them unclickable. Meanwhile, Irssi users (at least the vocal ones) insist that their chosen client is perfect, and if it doesn't have a feature, then it is only because said feature is a) "you don't want it" (obviously unnecessary), b) "why would anyone want it" (obviously stupid), or c) "just install a script :)" (can be implemented using the exposed API).

For example, the nick list. Most clients let you have a sidebar that lists all people currently in the channel, usually sorted by rank. Now, I don't care if it's useful or just clutter for you; that's not my point. My point is that Irssi users always say: "Oh, if I ever wanted a nicklist, I could just install nicklist.pl and have it."

What is always left unsaid is that Irssi does not actually have any API for creating vertical regions, so the script works only if you open a new terminal window running cat ~/.irssi/nicklist-fifo. Alternatively, if you happen to be using SCREEN, the script actually reconfigures Irssi's tty to be narrower than the SCREEN window, and draw directly in the blank space that appears...every single time Irssi's own area is updated. In contrast, even though Weechat has several such scripts (although nicklist is built-in) they do not have to do anything special; they simply create a "bar" and put text inside. (There is no difference between the built-in "nicklist" bar and the scripted "buffers" bar, as far as the user is concerned.)

And there are more examples like that – for example, the cap_sasl.pl script in Irssi doesn't just implement the SASL cap, it has to implement all of capability negotiation on its own, and you cannot write your own scripts that make use of other capabilities unless you change cap_sasl to request them. (Although I have an idea on how this could be done, if the CAP negotiation was split into a second script.)

Somewhat similar to the nicklist example is implementing the server-time capability, which lets bouncers attach the original message timestamp when you connect and see the last messages being replayed. Yes, it is possible to do that from an Irssi script. But again, the only way it can be done is a hack upon a hack.

Is that a good example of "flexible API"? Not very. But again, it's not really the client itself that's the problem – all clients have all sorts of limitations (like Weechat lacking any sort of hook for defining custom SASL mechanisms) – but rather the users who basically worship it, refusing to admit any imperfections.

ICMP, IPsec, IRC, and other random notes

Recent versions of Linux translate incoming ICMPv6 "Administratively prohibited" errors (type 1 code 1) to local -EACCES ("Permission denied") errno's, which is an interesting way of being informed that the server's firewall is blocking you. Unfortunately, all other operating systems (Windows, older Linuxes, various BSDs) appear to just ignore these ICMP packets, which is a bit sad – I expected them to at least terminate the TCP connection attempt with something generic like "Connection reset by peer", but instead they just wait until the connection times out.

Then again, the other OSes often do the same even for ICMP "Port unreachable". Also sad. Also strange that even on Linux, only ICMPv6 uses this translation – the equivalent ICMPv4 "Communication administratively prohibited" (type 3 code 13) results in -EHOSTUNREACH, "No route to host".
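To summarize the translations described above as a table (this is my own restatement of the observed behavior, not something from a kernel header):

```python
import errno

# Observed ICMP-error → errno translations on recent Linux,
# keyed by (family, ICMP type, ICMP code):
translations = {
    ("ICMPv6", 1, 1):  errno.EACCES,        # admin-prohibited → "Permission denied"
    ("ICMPv4", 3, 13): errno.EHOSTUNREACH,  # admin-prohibited → "No route to host"
}
```

The asymmetry between the two rows is exactly the oddity: only the ICMPv6 variant gets the informative EACCES treatment.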

Still, I really like the whole translating remote failures to local errno's thing. Somehow it actually makes me feel as if I'm using a network where everything is integrated and where I'm receiving feedback from the network, instead of just a bunch of computers exchanging data.

Similarly, the ping tool on Windows displays the message "Negotiating IP Security" whenever Windows is performing IPsec key exchange, which is a nice touch – when the same is happening in Linux, the packets just go nowhere. (I don't remember offhand if they're queued or discarded; either way, there's just no feedback.)

C:\>ping 10.42.0.1

Pinging 10.42.0.1 with 32 bytes of data:

Negotiating IP Security.
Negotiating IP Security.
Reply from 10.42.0.1: bytes=32 time=21ms TTL=128
...

(On a related note, IPsec with strongSwan is hella confusing at times.)


Spent the majority of the past year on IRC. Somehow I ended up being an operator in #archlinux, then in freenode's #irchelp, finally even in #systemd. Yes, #systemd was finally registered with network services after three years – for a project like this it's really surprising that the channel hasn't been attacked or invaded by trolls even once.

Kind of wondering why I now have +v in #inspircd, too. Given that I've only used InspIRCd for an hour or two, and I mostly just lurk in the channel... But I'm not complaining.


Messing around with Windows on the desktop PC while my sister's out somewhere. (I never got around to installing the TermiServ patch since the reinstall last month, so it only allows one user at a time.) It seems that the smaller disk is about to die sometime this year – SMART just started showing a large number of reallocations and failed writes. Which is a bit unexpected, because the disk hasn't been used for almost anything since the reinstall; it only has a tiny boot partition with NTLDR on it. (For some reason, NTLDR refuses to work at all when started from the larger disk – maybe 1 TB is too much for it?)

On the other hand, I did know that it wasn't going to live long – the Event Log started showing "controller errors" in 2010, and I moved all user files to the new disk in early 2012, so when the data corruption started occurring, I only had to reinstall the OS...and, well, everything else.

There was a time when I tried setting up backups on the desktop, but it was the same story again. WinRAR actually has several useful features – storing multiple versions, NTFS streams, file permissions, &c. – but it also turned out to be much slower than expected, and it could not deal with encrypted or locked files at all. RoboCopy was roughly the same, although much faster.

I even ended up writing my own tool in C#, which would just copy a directory tree but also worked with locked/in-use files (using temporary Shadow Copy snapshots, which XP happens to ... kind of "support"), insufficient permissions (using SeBackupPrivilege to bypass the checks), and even encrypted files (using EFS APIs to read the raw contents, without Windows trying to transparently decrypt them). But it was in C#, and the .NET runtime actually took way too long to even start. So in the end, I still have no real backups of the desktop PC, only a snapshot of F:\Users from before the reinstall.

Backup troubles

So I've spent the past week trying to find a good backup program. I still haven't found one.

It could be that my requirements are impossible. I want a tool that would be reasonably fast both when copying data and when adding a lot of small files; have some form of deduplication to avoid wasting gigabytes of space after I simply move files around; and not require a command-line tool to actually access the backups. But apparently no tool can do all three at once.

I tried rsnapshot (which seems to be just a wrapper around rsync --link-dest), as well as plain rsync combined with btrfs snapshots. While rsync is fast enough, it turns out it is too dumb to detect moves and renames, so if I simply rename ~/Videos/anime to ~/Videos/Anime, or if I move a dozen CD images from ~/TODO to ~/Attic/Software/OS/WinNT, rsync thinks all files are new and spends ages copying them again, instead of hardlinking from the previous snapshot as the --link-dest option normally would. (I'd be happy to know if I'm wrong on this one and if it can actually detect renames.) Plus, copying to a btrfs partition is much slower than expected; only 15 MB/s instead of the usual 25-30 MB/s (that's over USB 2.0).

I also used obnam for quite a while. It's fast and it has deduplication built in, so I can easily keep a few dozen weekly snapshots. But I'm not exactly a fan of having to use obnam restore whenever I want my files out. While that's a rather minor problem, and there apparently is a FUSE plugin in the works, there's also the risk that obnam's repository will get corrupted and won't let me access anything anymore. I'm also not exactly a fan of obnam growing to 1.5 GB of memory during its run – and that wasn't even the entire run, that was maybe 1/3 when I finally killed it. (I do hope that's a bug.) Also, while raw data throughput is fine, obnam is slow at adding individual files – in a directory with 200k smallish files it goes at maybe 6-10 files per second, which means it takes hours to copy a mere 5.2 GB of Gale chatlogs.

Next option is ZFS with dedup enabled – either rsnapshot or plain rsync with ZFS snapshots would work. The problem with it, however, is that it's a pain in the ass to maintain on Arch. Every time I install a new kernel version from [testing], there are four packages I need to rebuild, and since they all have versioned dependencies (e.g. zfs 0.6.2_3.10.9-1 depends exactly on linux=3.10.9-1 and zfs-utils=0.6.2_3.10.9-1) it means I must remove all ZFS tools entirely, then upgrade my kernel, then start rebuilding ZFS. (Of course, I just wrote a shellscript that sed's the versions out of depends= lines, but that doesn't make it any less of a pain in the ass.)
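The version-stripping part of that shellscript is trivial; here is the same idea as a Python sketch (my real script uses sed, and the depends= line below is an illustrative example, not a real PKGBUILD):

```python
import re

def unpin(line: str) -> str:
    """Strip version pins like 'linux=3.10.9-1' → 'linux' from a
    PKGBUILD depends= line, leaving other lines untouched."""
    if line.startswith("depends="):
        # '=' followed by a digit starts a version pin; consume until
        # the closing quote, space, or paren.
        return re.sub(r"=[0-9][^'\" )]*", "", line)
    return line

print(unpin("depends=('linux=3.10.9-1' 'zfs-utils=0.6.2_3.10.9-1')"))
# → depends=('linux' 'zfs-utils')
```

It only dodges the versioned-dependency dance; the rebuild order (remove tools, upgrade kernel, rebuild ZFS) stays just as painful.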

For now, I guess, I'll just stick with rsnapshot and limit it to four, maybe three, snapshots at once... but fuck, how can there not be a backup tool that doesn't suck in some way or other?

daily crontab

A few years ago I wrote a cronjob for updating ~/.ssh/authorized_keys on various servers. (It ended up having the name update-authorized-keys after a few renames.) It basically downloaded my authorized_keys file over HTTP (using one of a dozen HTTP clients to be extra portable), checked if it had my PGP signature on it, and supported some cpp-ish filtering. I was extra careful to look for a specific PGP key by fingerprint and all that.

And several months later, I wrote another cronjob – this time for updating my script collection and my dotfiles – called dist/pull this time. It first updated ~/code over Git, then exec'd the updated version of itself (just in case), which then updated ~/lib/dotfiles (also over Git). Sometimes I would patch dist/pull to do various cleanup jobs, and they would always run at midnight automatically. (As a bonus, it also ran the SSH key updater, instead of having two separate cronjobs.)

And I just realized that despite all my carefulness, I still ended up having an easily pwnable cronjob that automatically downloads and runs code every night without verification. Crap.

SASL authentication in Eggdrop

Many IRC networks now support SASL as the standard authentication method, which removes certain race conditions such as having your client auto-join channels before auth is complete – as a result your vhost/cloak would get applied too late, you might be denied entirely if the channel requires being authenticated, etc.

One day, out of boredom, I wrote a mostly-pure-Tcl implementation of IRCv3 CAP and SASL for the Eggdrop IRC bot. At the moment, it is located on GitHub Gist, and consists of three Tcl scripts – Base64; CAP negotiation and SASL PLAIN; plus a demo script for several other IRCv3 capabilities.

I say "mostly-pure-Tcl" because the CAP negotiation still needs a one-line patch to the core code. However, two days ago the "preinit-server" patch was merged into the main Eggdrop 1.8 repository, so the scripts can now be used without any modification.
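The PLAIN mechanism itself is tiny – RFC 4616's payload is just authzid, authcid, and password joined with NUL bytes and base64-encoded, which is essentially all the SASL half of the script has to produce. A sketch in Python for brevity (the Tcl version does the same thing):

```python
import base64

def sasl_plain(user: str, password: str, authzid: str = "") -> str:
    """Build the base64 payload for SASL PLAIN (RFC 4616):
    authzid NUL authcid NUL password."""
    raw = f"{authzid}\0{user}\0{password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")
```

The client sends this in an AUTHENTICATE line after the server advertises the sasl capability; all the surrounding CAP state tracking is where the real work is.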

Quiet ones

Yet another day, another news report, yet another guy murders his – yeah, almost always his – entire family, class, all coworkers, etc. Or himself, for that matter. And it's always unexpected, so unlike his normal behavior. They always say things like, "oh, he was such a nice guy, always quiet, always nice and so shy, how could he do such a thing?"

Does anyone consider for a moment why the guy was so quiet, so…?

Anyone ever think for a moment that perhaps he wasn't just born such, but taught, raised to be such? Ever wondered where he got enough anger to go on a killing spree? Might it have been all the anger that he had collected over the past twenty, thirty years, and never allowed to show it until it burst?

Do you remember the last time when your friend was angry at something and you just said "will you calm down, for fuck's sake"? Now imagine yourself in their place for a moment. Imagine being angry and being told to calm down. Where does that anger go, if not outside?

Somewhere deep inside, I guess. And imagine that every time it happens, your friends, your family – parents – they get angry at you because you dared becoming angry at them, such an unspeakable thing. Imagine that you can't even look annoyed because it's like being someone's parents automatically makes them infallible.

Not much more to do but stop being angry. Take a deep breath, then walk away. Decide that it's not worth it. And every time you just push the anger somewhere deep inside you; be a nice guy to everyone no matter how they wronged you, be a quiet guy no matter how everyone shouts at you.

But at some point it just doesn't fit there anymore. Maybe it starts eating you from inside – you become angry but at yourself, for being such a nice guy, such a quiet guy, such a loser. Or maybe it just explodes, maybe you just want someone to listen to you, anyone, to even acknowledge you...

It's hard being the quiet one.

Random notes: Setting up my virtual machine network

I'm still trying to set up a sane virtual machine network – one that would put VMs on both the laptop and the desktop in their own networks, routed to each other and to the real LAN, while still letting my own VMs access "LAN subnets only" services on the desktop, like file sharing.

It's not going well – I ended up running Unbound, BIND, and dnsmasq on the same laptop. Unbound I already had running as my validating resolver; dnsmasq serves DHCP to the VM network and hosts a simple dynamic-DNS LAN domain for accessing random PCs; BIND hosts a static domain for accessing the two Active Directory realms installed in two VMs, because dnsmasq's static DNS settings are plain stupid. So now I have all my VMs nice and clean in their own net, routed to the real LAN – that is, routed and NATed so that LAN hosts see the VMs' real addresses while the LAN router/gateway/cheap-ass-DSL-modem can still do its own NAT thing properly. But the desktop also needs to see the NATed addresses when VMs try to access shared files, so that the firewall lets them through... I might have written the stupidest NAT rules ever just to make this work:

-A POSTROUTING -s 10.7.0.0/16 -d 192.168.0.0/16 -p tcp --dport 445 -j MASQUERADE
-A POSTROUTING -s 10.7.0.0/16 -d 192.168.0.0/16 -p tcp --dport 139 -j MASQUERADE
-A POSTROUTING -s 10.7.0.0/16 -d 192.168.0.0/16 -j ACCEPT
-A POSTROUTING -s 10.7.0.0/16 -o wlan0 -j MASQUERADE

This whole mess turned out to be needed because my ISP configures its routerdems to have a "management" network in addition to "user" and "Internet", and that network happens to use 10.0.0.0/8. (It already confused the hell out of me once, when I wanted to connect to a VPN but the traceroute to 10.0.x.x addresses kept going through my ISP.) This makes the routerdem think the packets from my VMs aren't actually coming from inside the LAN, so it refuses to apply NAT to them – which is why my laptop (the VM host) has to NAT all of them to the LAN address range. On the other hand, I still want all VMs to be reachable from the real LAN using their own IP addresses, hence the ACCEPT rule.

Aside: The Spooler service in Windows XP is rather picky about the hostname you use to access it. Apparently, the full UNC path of the printer is sent when connecting to it, so if you're trying to connect to \\snow.virt\FooPrinter but the server thinks it's \\snow.home (not \\snow.virt), it will return "Invalid printer name" to the OpenPrinter request – despite having already accepted an SMB connection to \\snow.virt\IPC$ without even a blink.

wmii, i3, and IPC protocols

I used to use the wmii tiler for a long time (before going back to GNOME), and recently it seems i3 has become popular, so I decided to try it out. I'm not going to comment on the usability, features, etc. – but I sometimes have really odd criteria for choosing software, so here's one such odd comment.

When I used wmii, it had a really sweet control interface, styled after various plan9 software: the configuration file was essentially a bash script (later ported to various other languages) that had its own event loop. The control interface was 9P over a Unix socket: read /event, write to /ctl, list files under /tags, and so on. You could even mount it as a local filesystem, using native 9p.ko.

(Later I went back to GNOME. The Shell isn't scriptable externally (at least not easily), but overall, almost all programs I run use DBus in some way or other. It's also somewhat nice and consistent.)

Then I tried i3, which claimed to be heavily inspired by wmii – and at least the appearance and the control keys were quite similar. (Although wmii has a simpler layout model – it always splits the screen into columns, similar to acme.) But I was somewhat disappointed at i3's IPC protocol – even though I have zero experience in designing such things, it still looks ugly to me.

There's a "command" message type, and six "get_foo" message types. There's "check the highest bit to see whether it's an event reply or a normal reply". There are no event names – just a list of magic number definitions in i3/ipc.h which has to be copied into your i3ipc implementation; this is not a problem by itself, of course, but only as long as the definitions are assumed to be stable – which, in this case, they aren't.

So that's my impression of i3.

Untitled – on .plan files

Today while discussing home directory permissions and the 'finger' command, I mentioned the long list of users at @linerva.mit.edu. Someone quickly discovered a user or two having their contact information in .plan files, and the general reaction was:

<+woddf2> DarkFox: User "marc" doxed himself. o_O
<+woddf2> DarkFox: "amu" also doxed himself.
<+woddf2> DarkFox: Apparently many users ha(d|ve) the habit of putting their doc in ~/.plan.
<+woddf2> DarkFox: Dox == documents (e.g., real name, home address, telephone number)

While I've never been at MIT, "public" seems to be the default there – not only that user's contact info, but their entire home directory is world-accessible over AFS. I often do the same; I consider contact information to be more-or-less public, as it has been in the past. So it is quite unusual for me to see other people finding a random user's phone number, and reacting as if it were a precious gem. Even calling it "doxing" just doesn't seem to fit here.

On chat networks

Today I added my Yahoo IM account to Pidgin, just to see if it still works. It did – and as soon as it connected, I got 10 messages from ten different spambots (apparently YMSG stores offline messages). Windows XP has this feature where you can Ctrl+click on taskbar buttons to select multiple windows, the same way you would select multiple files, and then close them at once (or tile/cascade the selected windows). It's something GNOME 3 still lacks.

I did this after Microsoft decided to kind-of shut down their MSN Messenger servers, to make more space for Skype. The standard servers are already refusing raw MSNP connections, although Pidgin can still connect using its "HTTP method". I'm somewhat amazed that even on various Linux geek channels on IRC, people are saying things along the lines of "good riddance", not realizing that Micros~1 is shutting down a sufficiently reverse-engineered IM protocol in favor of a secret one that requires a tightly locked down client. There are at least a dozen unofficial MSNP clients for both Windows and Linux. Hell, MSN Messenger had official XMPP servers. Meanwhile, who still remembers how the attempts to reverse-engineer Skype went? Not well.

Oh well. Maybe things will get better when Microsoft tries to integrate Skype into its build system and forgets to enable obfuscation, or writes a HTML5 client, or something. Meanwhile, Yahoo! Messenger is still online, as are ICQ and AOL Instant Messenger. I still remember my UIN, it seems. (And I've never had more than three contacts total over all four protocols, but that's off-topic.)


Recently I found another IM protocol, Gale, which feels somehow like a cross between XMPP, Zephyr and IRC.

In other words, Gale takes the best parts of all three, while keeping a very simple interface (and one much more scriptable than, say, XMPP). Similar to Zephyr, there's no full-blown client by default, only separate command-line tools for subscribing and for posting a message. You can compose a message in Vim and send it with :w !gsend pub.

rain ~/src/gale master
$ gsub -e test@nullroute.eu.org
! 2013-01-18 17:51:37 gsub notice: skipping default subscriptions
! 2013-01-18 17:51:37 gsub notice: subscription: "test@nullroute.eu.org"
! 2013-01-18 17:51:38 gsub notice: connected to decay.nullroute.eu.org
(at this point, gsub simply forks into the background)

rain ~/src/gale master
$ echo This is a test. | gsend test@nullroute.eu.org
--------------------------------------------------------------------------------
To: test@nullroute.eu.org
This is a test.
           -- grawity@nullroute.eu.org (Mantas Mikulėnas) 2013-01-18 17:51:45 --

Unlike IRC, it's possible to subscribe to the same address from many locations; join/part notifications do not exist; there's no way to know who's reading messages to a public address. The gsub client does support sending special "presence state" messages, but those are merely informative, not persistent. Addresses can be hierarchical – one could subscribe to pub@example.com or only to pub.tv.fox@example.com.
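That hierarchical matching could be modelled like this – a hypothetical illustration of the rule, not Gale's actual implementation:

```python
def covers(subscription: str, address: str) -> bool:
    # A subscription to "pub@example.com" also covers "pub.tv.fox@example.com":
    # categories nest by dotted prefix, within the same domain.
    sub_cat, sub_dom = subscription.split("@")
    cat, dom = address.split("@")
    return dom == sub_dom and (cat == sub_cat or cat.startswith(sub_cat + "."))
```

So subscribing high in the hierarchy gets you everything beneath it, which is what makes the addressing scheme feel more like Zephyr classes than IRC channels.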

There's a downside, too. Gale messages can be encrypted, and to authenticate senders & receivers everyone has an RSA keypair; keys are verified hierarchically – the "ROOT" key signs TLD keys, the TLD keys sign domain keys, domain keys sign user and/or subdomain keys, and user keys can sign subkeys. To set up a new domain, one needs to email their domain's public key to the root key's owner and receive a signed key back. So far, all signing has been done by the same person – Gale's creator, Dan Egnor. There have been proposals for a notary, but nobody cares enough to finish them... Nevertheless, the scheme is better than Zephyr's Kerberos-based trust relationships, which simply do not scale beyond half a dozen realms.

Unfortunately, very few users of Gale remain by now. Maybe a dozen still post to pub@ofb.net to this day; most of the rest have probably migrated to IRC or XMPP or Skype. Overall, it feels as if Gale should have received a lot more attention than it has.

Update: Since the CVS server described on Gale's website is now defunct, I've obtained a copy of the entire repository and imported it to Git – it's available at github.com/grawity/gale, with minor fixes such as better libc locale support.


The next post, if I ever get around to it, should be about IRC, Zephyr and PSYC.

Chaos.

Today, I tried accessing my laptop's files from the family desktop, running Windows XP. After typing the usual cd \\rain\grawity in Total Commander, I was greeted with a password prompt... which did not accept any of my usual passwords, for neither rain\Mantas nor rain\grawity.

At first I thought I screwed up my Samba's usermapping script, or that I forgot to configure Windows to use NTLMv2 (after it was reinstalled), but the configuration was right and curiously the usermapping script didn't seem to be executed at all. So I tried to take a look at the raw SMB traffic with Wireshark, and after filtering for smb I was greeted with a blank screen. Odd.

After expanding the filter to smb or netbios, I noticed that the desktop was sending NetBIOS name queries for RAIN, but wasn't receiving any responses... (I had forgotten to restart nmbd.service after killing a bit too many processes on the laptop.) Since the NetBIOS name query failed, Windows would fall back to good ol' DNS and look up rain.nullroute.eu.org – which had no IPv4 addresses, only an IPv6 one.

Since there was no IPv4 address, Windows skipped the LanmanWorkstation network provider entirely – it does not have IPv6 support in XP – and tried the next configured one. Since the second provider is WebClient, which implements WebDAV, Windows started poking around on the laptop's webserver. It completely ignored the lack of PROPFIND in the OPTIONS response, sent a PROPFIND request anyway, then interpreted "405 Method Not Allowed" to mean "access denied" rather than "I don't speak WebDAV".


This little problem reminded me that I still do not have proper hostname resolution set up on my LAN. On various occasions it relies on NetBIOS (sucks), Bonjour (yet another daemon), global DNS (can't put local IPv4 addresses there), and router-provided *.home DNS (router forgets hostnames, adds new-host-1.home and other stupid entries). Sometimes even /etc/hosts (ugh, manual updates). Ironically, of all those, NetBIOS has been the most reliable one so far. (Maybe I should just stop worrying about its inefficiency? The LAN is really quite small anyway.)


On the topic of consistency, I still haven't started doing consistent backups. On the laptop it's easy – just connect the external HD and run obnam ~ every now and then. On the desktop, it's harder, as 1) it runs Windows, 2) it has severely limited CPU resources, 3) it's inconvenient to carry the external HD there.

The largest problem might be #1: since it runs Windows, there is no quick and easy equivalent of tar – and I do want a good backup tool, not a lame Cygwin port. In particular, I need it to back up files currently in use (ignoring share flags requires filesystem snapshots), files which the "backup" tool's account cannot access (requires modifying the process security token & calling the low-level CreateFile() function with FILE_FLAG_BACKUP_SEMANTICS), and even EFS-encrypted files (requires an altogether different API to access).

I have, in fact, written a backup tool that does most of the above – even "integrating" it with Volume Shadow Copies despite the fact that Windows XP doesn't allow persistent VSS snapshots, and the API for making temporary snapshots is quite undocumented – but unfortunately it is in C#.NET, which conflicts with #2 "must be extremely light on CPU" (the desktop has a second-hand CPU that had been overheated many times).

As for #3, my latest plan is to have the tool create local snapshots every day (to help with recovering accidentally deleted files), and move old snapshots to external storage (maybe Obnam again?). When writing vssbackup, I had only Notepad2, the csc compiler (which came as part of .NET runtime), and online MSDN docs. Now I have Visual Studio installed, so maybe I should try porting it to unmanaged C++, get rid of some unnecessary parts...

Nah. Who needs backups?


Well, my sister did need backups just today, after accidentally permanently deleting some files. One of them was an e-book which I thought I had three copies of; but I could find none of them – and I had to search about four separate directories all named "Library". (Then I remembered I had the e-book in my website's /mirrors directory. Whew.) With the other, no such luck; it was a personal document that disappeared in the so common case of "ah, it's only a copy, I can delete this" followed by "ah, but it was the last copy" – and one that had taken over a week to write.

Yes, it turns out I have four "Libraries", three software archives (all of them a complete mess), two "TODO" directories, and one huge "Downloads" dump. Every time I try to organize all that stuff, I just end up with one more half-disorganized directory. Sigh.

Looking back

New posts here are rare. I'm not much of a writer – more of a tweeter – and whenever I touch this website, it's mostly just to fix some tags or to try out minor CSS adjustments. The oldest entry in this website's Git history is "Initial commit, discard history". One year later, this looks like the most phenomenally stupid thing I've ever done to my data, and I couldn't find any backups either... Once, I considered importing Archive.org snapshots from the rootshell.be times, but they were horribly incomplete – and of course lacked the PHP sources, however minimal those were.

At one time, I got tired of repeating the same <html> <head> and wrote a PHP thingy that just parsed Jekyll/Liquid-style headers. Other than that, the website never grew much beyond a pile of static, hand-written HTML files. (I'm a cheapskate and mostly use "shell" servers my Internet friends have paid for, so being able to quickly move the website to a different server became useful more than once.) There have been several attempts at making this page auto-generated; first using Jekyll (didn't last long), then a custom Perl script using Text::Template, which I later ported to Ruby and erb. (The ERB script made it easy to add an Atom feed, which meant I couldn't call the blog "Not a blog" anymore.)

Lately, I've been reading about the Semantic Web again, so almost half of the commits in this website's truncated history are updates to my Webfinger profile, FOAF card and other miscellaneous junk. I'm not sure anyone still takes those things seriously – although Google sort of did, with its Social Graph API – and nowadays probably the only similar thing is Facebook's "Open Graph" API, which handles the same information as FOAF, but limited to Facebook members.

But all this is pointless if the website doesn't contain anything useful to others. So far, I only have three serious articles – one about the "startkeylogger" bug, another about Bluetooth and dual-booting, and a quite outdated page about GNOME Keyring. They do get a visitor every now and then; even from such odd places as 2ch.net (if Google Webmaster Tools is telling the truth). The posts in this blog/journal/thingy consist mostly of stories, rants, and some horribly outdated ConsoleKit troubleshooting tips. I never get around to cleaning them up, though. Most of my "useful" writings are in the form of Super User answers.

Answering computer questions is how I've spent most of my free time – first on various local forums, later on Usenet and mailing lists, currently on IRC and Super User. I've lost count of the messages written, questions answered, and problems solved. Some people even say "thanks". I learned a lot myself, too – how to set up multi-master mirroring for Cluenet's OpenLDAP servers, how to store a Kerberos database in LDAP, how to set up NFSv4 with Kerberos, and so on.

After Cobi and Crispy – the network's founders – left Cluenet for the real world, I also had to learn how to untangle messy PHP code and equally messy rules & policies. The website is still ugly PHP + ugly Smarty. (I have already removed much of the old cruft: three identical installations of MediaWiki, and various code for a "redundant cluster" which only ever had one server.) Mail still doesn't work; the domain was pointed at Cobi's Google Apps because running Postfix was "too much trouble" or something.

The signup process is still horrible – much of the bureaucracy is so unnecessary for such a small network that it has scared away quite a few new users. New people never stay for long. Maybe one day it'll be better.

I haven't had much of a taste of the "real world" myself. I somehow managed to get into a local college, and even survived the first year. The second year is going well so far – but I'm much too lazy to actually do any assignments, too much of a geek to go out with friends, too much of a loner to have friends. The various mood disorders I have do not help either; in a way, they're cause and effect at the same time.

I still keep listening to music and hacking away at Linux command line, but even that has become a bit boring. I've taken to reading various stories on fanfiction.net and other similar sites, or just daydreaming – creating my own fiction. Reading about the lives of others lets me forget about having no life of my own; it's an escape.

I still keep on living. Maybe one day it'll be better.

Why D-Bus is awesome

D-Bus is a relatively recent IPC system, replacing DCOP, Bonobo, and various hacked-together Unix socket protocols. Some still consider it "bloat", but it has quickly gained popularity nevertheless – mostly among GUI applications, but also among various system components; for example, both the Upstart and systemd init systems are controlled over D-Bus.

One of the most useful applications of D-Bus is the MPRIS specification, which has no equivalents in other environments so far. Originally written by the VLC team, now part of Freedesktop.org, MPRIS is an interface specification for controlling media players over D-Bus. It defines the common "play", "pause", "seek" and other such commands, allowing any compatible player to be controlled with a single program – and there are many compatible players; one list counts as many as twenty, with MPRIS support being a common wishlist item for the others.
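The spec's fixed naming is what makes a single controller possible: only the bus name varies per player, while the object path and interfaces are the same everywhere. A minimal sketch of how a controller addresses any compliant player (the player names here are just examples):

```python
# MPRIS2 naming convention: the bus name varies per player, but the
# object path and the interface are fixed by the spec.
def mpris_command(player, method):
    return [
        "dbus-send", "--print-reply", "--type=method_call",
        "--dest=org.mpris.MediaPlayer2." + player,
        "/org/mpris/MediaPlayer2",
        "org.mpris.MediaPlayer2.Player." + method,
    ]

# The same few lines work for any compliant player:
print(" ".join(mpris_command("vlc", "PlayPause")))
print(" ".join(mpris_command("clementine", "Next")))
```

Swapping "vlc" for any other compliant player's name is the entire porting effort.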

Screenshot – GNOME 3 Media Player Indicator
GNOME 3 "Media Player Indicator" extension

All of this means that instead of having twenty different commands to skip to the next song, or twenty different desktop applets, there only needs to be one (for example, the popular extension for GNOME 3 or my command-line utility), reducing the time needed to add support for the Awesome New Player of the Week.

IM clients can easily set the user's status to the song currently playing in any player implementing MPRIS; even Ubuntu's Sound Menu switched to being a plain MPRIS client in Natty, replacing the custom "Ayatana" protocol of earlier releases.

Unfortunately, not all players support MPRIS v2 yet – some only implement the older, clunkier v1; others are stuck with custom D-Bus interfaces or different IPC systems altogether. (For example, mpd requires a bridge client that acts as an MPRIS service – although that is understandable for a player aiming to be network-transparent.) Buggy and/or incomplete implementations are also common; I spent some time recently fixing BeatBox and the Exaile plugin.

MPRIS itself is still somewhat minimal, in an attempt to remain easy to implement in any media player, so it lacks such niceties as playlist management (although a play queue is supported) or changing song ratings. For these, many apps still implement custom D-Bus interfaces alongside MPRIS.

These problems become relatively minor, however, once one realizes that there is no alternative at all – on Windows, only a few such programs can be controlled through documented interfaces; most of the time it comes down to sending fake keypresses and button clicks with AutoHotkey, or even reading title bars just to determine the current song, as I noticed a Pidgin plugin doing to interface with foobar2000. Winamp was possibly one of the first to have good IPC support (based on Win32 messages) as part of its extensibility, but it may well be the only one.

Screenshot – D-Feet introspection tool
D-Feet introspection tool

While remembering Winamp, something certainly could be written about the introspection feature of D-Bus – being just another interface with a single method, it allows browsing the supported methods and properties of almost any D-Bus service. (The Perl and Python bindings implement the "Introspectable" interface automatically, but even services written in C almost always provide it.) It is much easier to experiment with a system browsable in tools such as D-Feet than with one that requires reading a long list of numeric messages and their meanings (as with Winamp's WM_COMMAND messages).
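Introspect() returns a plain XML description of the object, so even without D-Feet it only takes a few lines to see what a service offers. A sketch using an invented interface (the XML structure follows the introspection format; the interface name and its members are made up):

```python
import xml.etree.ElementTree as ET

# A (shortened, invented) reply from some service's Introspect() call:
reply = """<node>
  <interface name="org.example.Player">
    <method name="PlayPause"/>
    <method name="Next"/>
    <property name="Volume" type="d" access="readwrite"/>
  </interface>
</node>"""

root = ET.fromstring(reply)
for iface in root.iter("interface"):
    methods = [m.get("name") for m in iface.iter("method")]
    props = [p.get("name") for p in iface.iter("property")]
    print(iface.get("name"), methods, props)
```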

Gmane – how all mailing lists should work

Recently I discovered Gmane, a mailing-list-to-NNTP gateway. It turned out to be the solution to several of the biggest annoyances I've had with mailing lists so far.

Gmane allows all mailing lists to be accessed using a standard protocol, NNTP, which provides a consistent interface instead of having to deal with five different web-based management and archive sites (Mailman, Pipermail, MHonArc, etc., etc.). I can tell my newsreader to kill uninteresting threads and highlight others, and even post replies to the "newsgroup".

Over NNTP, it's also easy to access archived messages, even those sent before subscribing to the list, in their original RFC*822 format instead of Pipermail's heavily-filtered HTML archive. With my newsreader (Thunderbird), I can even make copies by dragging & dropping interesting messages to an IMAP folder, original headers and all. Very few (in fact, close to no) web-based list archives offer "raw" or "mbox" versions.

The only downside is that Gmane mangles email addresses on the majority of lists, breaking PGP signatures. But I suppose that's the cost of having a giant archive of email messages publicly accessible.

Massive internet connection weirdness

SSL is flaky. Attempting to connect to twitter.com returns "ssl_error_rx_record_too_long". Other sites give occasional "ssl_error_bad_mac_read".

Kerberos is flaky; two out of three realms are rejecting my password – although AS-REPs and TGS-REPs are returned, all I get is "Decrypt integrity check failed".

SSH is flaky; pubkey auth gives "Corrupted MAC on input." – strangely, even over Tunnelbroker.

DRM continued

There has been some discussion about a recently released e-textbook, due to it being distributed with a Windows-only DRM layer despite lacking copyrighted content. I've no need for this textbook, but the descriptions of the DRM seemed familiar, so I decided to take a look – for educational purposes, of course. It turned out to be the same tux_XFS DRM I've described earlier, with the same annoyances (no copying, no printing, an ancient Adobe Reader; breaks the clipboard for the entire session while open), but with no apparent protection of any kind – distributed publicly over the Internet, with no serial number.

The publisher's website did have a form asking for my name and email, but the download itself carried no identifying information (unlike some shareware programs I've seen, which would embed the user's name into the installer). However, the ebook's installer does silently run the tux_XFS online activator.

Overall, the protection looked the same as in previous ebooks – except this time, the .exe swap trick didn't work; the new .exe just wouldn't load XFS.dll. It could be that there is some secret handshake to be done with the launcher in this version. So I went for a different approach.

The overlay still used the same container format and the same method of encrypting Reader's temporary files; I successfully decrypted the latter with the key I found from the previous ebook. However, Reader keeps an exclusive lock on the files while running, and deletes them afterwards, so this is not a very convenient method, as well as still requiring Windows.

Finally I attacked the XFS container files directly. The format of 001.dat turned out to be rather simple, and even though I still don't know the purpose of some data ranges (especially in the container header), most of them correspond neatly to various Windows file APIs (filename, DOS attributes, timestamp) or to metadata about the container item (start position, raw size). Soon I had written a cross-platform script for extracting and decrypting these containers. It does not handle the "no copy"/"no print" bits in the .pdf files, though, and I'm still searching for a reliable tool to remove those.

Related note: The container files store files in 64 kB blocks, padded with null bytes. At the end of each file, there will be several kB worth of null bytes, XORed with a 256-byte key. Hello known-plaintext attack...
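The attack is exactly as easy as it sounds: XOR-decrypting null bytes yields the key itself, so the padding at the end of each block hands over the key for free. A simulated sketch (the 64 kB blocks and the repeating 256-byte key are from above; the container layout is otherwise simplified):

```python
import os

BLOCK = 64 * 1024
KEYLEN = 256

def xor(data, key):
    """XOR data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Simulate one container block: a short file, null-padded to 64 kB,
# then XORed with a repeating 256-byte key.
key = os.urandom(KEYLEN)
plain = b"%PDF-1.4 pretend this is a protected ebook"
block = xor(plain + b"\0" * (BLOCK - len(plain)), key)

# Known plaintext: the tail is all nulls, and 0 ^ k == k, so any
# key-aligned 256-byte slice of the padded region IS the key.
recovered = block[-KEYLEN:]  # BLOCK is a multiple of KEYLEN, so aligned
assert recovered == key
assert xor(block, recovered).rstrip(b"\0") == plain
```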

Reprogramming USB drives

The Prestigio USB pendrive my mother had suddenly started throwing various I/O errors when files were added or removed; it would refuse to update the first few blocks, where the FAT lies, becoming more or less useless. The obvious solution was to buy a second, identical pendrive and copy all the data over.

...however, the new drive turned out to be "enhanced" (read: "fucked up") with a read-only second partition of some sort – 500 MB reserved, but only a tenth of it actually used (by an outdated AVG Free and a dozen fucking JPEGs). "Hey, how about we waste 6% of the drives we sell for absolutely no reason?" In addition, the disk was not divided using MBR partitions, but instead appeared as two distinct LUNs.

Browsing the internets, I found a tool for re-programming the UT165 flash controllers that Prestigio pendrives are built upon. I was able to merge both LUNs of the new drive – and not only that, but I could also low-level reformat the old drive, skipping the bad blocks (which left me with a perfectly working 3.3 GB drive out of 4 GB).

Async Kerberos logins

My computers have Kerberos set up, which is practically useless (there's only one machine, not counting Cluenet boxen) but still somewhat cool. Using pam_krb5 to obtain Kerberos tickets on login, however, can result in really slow logins when the connection is unreliable. Since the accounts are primarily kept locally (/etc/passwd), I have switched to pam_exec, which runs a background script that obtains tickets using PKINIT (since apparently I cannot pipe passwords to kinit).
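For reference, the PAM side of this is a single line; a sketch with a hypothetical script path (pam_exec exports the login name to the script as $PAM_USER, and the script should background its kinit so the login is never delayed):

```
# /etc/pam.d/system-login (sketch – the script name/path is hypothetical)
session    optional    pam_exec.so /usr/local/bin/kinit-pkinit-bg
```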

Note to self: PKINIT requires krb5-pkinit to be installed on the server. As obvious as it may look, I've already forgotten it twice, being used to Arch's "everything in one package" philosophy.

In which I incriminate myself

I'm occasionally asked to crack the proprietary DRM on ebooks from an unnamed publisher. Although the ebooks themselves are just PDFs displayed in a packaged Adobe Reader 7, they must be opened through a "launcher" program, which attempts to prevent the book from being copied.

Some books were distributed on USB pendrives and simply checked the Registry for a specific storage device ID – which could be bypassed by writing a launcher-launcher that adds the necessary values. I had to do this for one book, since its PDF file communicated with the launcher in some way. (There was also some sort of "drive type = removable" check, where I went old-skool with w32dasm and hiew.) Other launchers were done away with entirely, keeping just a batch script to start the packaged Reader.

The last few releases were easier. Some books used online activation, others still checked hardware IDs; however, the PDF files were static, with no dependencies on the launcher or the packaged version of Reader. Extracting them was easy – three ebooks had simple password protection, with the launcher "typing in" the password; those went down against Asterisk Logger. The other two were encrypted using simple XOR, but %TEMP% held the decrypted files for me to grab – which also allowed me to find the XOR key.

The latest book was quite interesting: temp files were encrypted, and Process Explorer showed nonexistent executables running. As its own debug log revealed, a special DLL loaded into AcroRd32 would hook such Windows calls as ZwOpenFile, essentially setting up an overlay file system which contained the protected files and was only visible to AcroRd32. The trick was to make it run cmd.exe instead, and use that to copy files. (As it turned out, the overlay would also automagically decrypt Acr*.tmp with yet another XOR key. Figuring out what happens when you XOR-decrypt a series of null bytes is left as an exercise to the reader.)

An interesting thing to note: The filesystem overlay was also used for sending messages from the PDF to the launcher, by attempting to open nonexistent documents named #I, #Ofilename, and so on.

VirtualBox bridged network and WLAN

Bridging wlan0 is a pain. You normally cannot add it to a bridge interface (brctl returns "Operation not permitted"), and using the VirtualBox "bridged" filter results in a big mess of ARP and DHCP conflicts. The cause is that 802.11 frames by default carry only three addresses: the MAC addresses of both wireless devices (laptop and AP) and of the final recipient (as in Ethernet) – it is always assumed that there is only one possible originator.

802.11 can carry a fourth address – the originator's – and this is what repeaters use in WDS mode. The feature can be enabled on Linux too, and doing so allows wlan0 to be used in bridge interfaces, as well as with VirtualBox bridged networking:

iw dev wlan0 set 4addr on

However, with 4addr enabled, you're likely to get completely ignored by the AP: association succeeds, but all data frames disappear into the ether. This could be for security reasons (because it's damn hard to spoof the source MAC address. Yeah.) On my router (running OpenRG), it was necessary to enable "WDS" mode on the wireless AP interface, add a WDS device restricted to my laptop's MAC address, and add it to the LAN bridge. 4addr packets now work.

There's another problem with this, though – the router now rejects three-address packets from the laptop, which can be rather inconvenient (having to toggle 4addr every time the WLAN network is changed). The workaround is to add, on the laptop, a second wireless interface linked to the same device, but with a different MAC address:

# undo the earlier configuration
iw dev wlan0 set 4addr off
# add a second interface – the name was chosen arbitrarily
iw dev wlan0 interface add wds.wlan0 type managed 4addr on
ip link set dev wds.wlan0 addr $ADDR
ip link set dev wds.wlan0 up

Here $ADDR must match the WDS device address configured in the router; other than that, it can be any valid MAC address. The original MAC of wlan0 then remains for "normal" usage.

It's possible to use both wlan0 and wds.wlan0 at the same time – although I've only tested associating to the same AP twice, not to different APs. I'm guessing they would need to at least be on the same channel.

Kerberos on Windows XP

After joining Windows XP to an external Kerberos realm with ksetup /setrealm and then unjoining it, Windows completely loses the ability to log in as a Kerberos account. Instead of looking up a Kerberos KDC (registry configuration or _kerberos._udp.REALM SRV records), it attempts to find an Active Directory domain with the same name, by looking up _kerberos._tcp.dc._msdcs.REALM and attempting to make a CLDAP lookup on it for (&(&(DnsDomain=NULLROUTE.EU.ORG)(Host=HAILSTORM))(NtVer=0x20000006)).

Why? I have no idea, yet. Registry accesses by LSASS as shown by ProcMon remain the same.

After a realm join and unjoin using ksetup, followed by a standard workgroup join and a reboot, it started working. After a second reboot, it stopped. Now it works again.

<fahadsadah> grawity: generally, don't attempt to make Windows do non-Windows things.

Disappearing AutoPlay items

Sometimes the AutoPlay action window in Windows XP stops displaying such built-in actions as "Open folder" or "Take no action". This is usually caused by a misconfigured event handler. (I'm not sure yet how the handler gets misconfigured, though.)

  1. Run regedit.
  2. Navigate to HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AutoplayHandlers\EventHandlers.
  3. Find the offending value and delete it. Often problems are caused by Picasa2ImportPicturesOnArrival.

ntpasswd, chntpw and group membership

After promoting a user to Administrators with chntpw or ntpasswd, the Administrators group becomes impossible to access (returning "Invalid argument"). This is caused by chntpw incrementing the member count in SAM, but failing to actually append the SID of the new member. (Similarly, the Users group has its member count decremented, but the old SID is still there. This does not result in an error because Windows just ignores the rest.) Fixing this requires some dark magic.

  1. Run regedit as LocalSystem (using psexec or similar hacks).
  2. Navigate to HKLM\SAM\SAM\Domains\Builtin\Alias.
  3. Fix the Administrators group: In 00000220\C, decrement the dword at 0030h (a dword is 4 bytes, little-endian).
  4. Fix the Users group: In 00000221\C, increment the dword at 0030h.
  5. Fix groups of the "promoted" user: In subkey Members\authority\relative, change the "(Default)" value to 21 02 00 00 (RID 545, the built-in Users group). Use "Modify binary data" for this.
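A sketch of step 3's byte surgery, done on a stand-in buffer rather than a real hive (only the "member count is a little-endian dword at offset 0x30" detail is taken from above; the rest of the C value layout isn't modelled):

```python
import struct

OFFSET = 0x30  # member-count dword inside the group's binary C value

def adjust_member_count(c_value, delta):
    """Return a copy of the binary C value with the member count changed."""
    count, = struct.unpack_from("<I", c_value, OFFSET)
    patched = bytearray(c_value)
    struct.pack_into("<I", patched, OFFSET, count + delta)
    return bytes(patched)

# Stand-in for a C value claiming 3 members:
blob = bytearray(0x40)
struct.pack_into("<I", blob, OFFSET, 3)

fixed = adjust_member_count(bytes(blob), -1)
print(struct.unpack_from("<I", fixed, OFFSET)[0])  # → 2
```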

ConsoleKit and local sessions

After upgrading an Arch system that has been untouched for two months, ConsoleKit sessions created by startx were no longer marked as active. Apparently, pam_loginuid.so is now needed in order for ConsoleKit to consider the session to be "local".

# from /etc/pam.d/login:
session		optional	pam_loginuid.so
-session	optional	pam_ck_connector.so

Inserting processes into a pipeline

Window one: Whoops, forgot pv...

$ tar c foo | gzip > foo.tgz

Window two: Create two named pipes.

$ mkfifo /tmp/{in,out}
$ pv /tmp/in > /tmp/out

Window three: Run gdb on the writer.

$ gdb -p `pgrep -x tar`
; close stdout (fd 1)
(gdb) p close(1)
$1 = 0
; open a fifo for writing (1 == O_WRONLY)
(gdb) p open("/tmp/in", 1)
$2 = 1
; we got lucky and received fd 1 again
; in some cases, it would be necessary to do `p dup2(x, 1)`

Window four ...and on the reader.

$ gdb -p `pgrep -x gzip`
; close stdin (fd 0)
(gdb) p close(0)
$1 = 0
; for open(), 0 == O_RDONLY
(gdb) p open("/tmp/out", 0)
$2 = 0
; see above note regarding fds
(gdb) detach
Detaching from program: /bin/gzip, process 1900
(gdb) q

Back to window three.

(gdb) detach
Detaching from program: /bin/tar, process 1899
(gdb) q

Sharing Bluetooth link keys between Windows and Linux

Post moved to bluetooth-key-sharing.

ConsoleKit

(Update: Since the writing of this post, I moved on to GNOME 3 with GDM, and ConsoleKit was replaced with systemd, so almost everything in this post is out-of-date. Even for those still using startx, the necessary setup is much, much simpler. – 2013-07-04)

Just spent three days getting the {Console,Device,Policy}Kit fuckery to allow me to mount disks from Nautilus.

Apparently this used to be needed:

  1. session optional pam_ck_connector.so in PAM config (more specifically, /etc/pam.d/login) to create the first ConsoleKit session;
  2. ck-launch-session in ~/.xinitrc to create the second session, with X11 attached;
  3. DBus running, with both system and session buses;
  4. dbus-launch inside the second (X11) ConsoleKit session because it starts the gvfs-gdu-volume-monitor daemon used by Nautilus;
  5. ...and an authentication agent (such as /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1) running.

Finding out which part is missing:

# Two sessions with your tty as 'display_device', with one being active
ck-list-sessions
# PolicyKit works
pkcheck --action-id org.freedesktop.udisks.filesystem-mount -u --process $$
pkcheck --action-id org.freedesktop.udisks.filesystem-mount -u \
  --process $(pidof gvfs-gdu-volume-monitor)
# udisks/DevKit works
udisks --enumerate
udisks --mount /dev/sdXY

Having this in ~/.xinitrc makes things easier. (Updated for my new configuration, in which startup programs are launched by GNOME or wmiirc, depending on $session.)

#!/bin/bash

# xrdb and xsetroot can go here
# Applets, agents, other shit is handled by $session

stack=(
	ck-launch-session
	dbus-launch --exit-with-session
)

[[ $SSH_AUTH_SOCK ]] ||
	stack+=(ssh-agent)

exec "${stack[@]}" ${session:-gnome-session}

GNOME_KEYRING_DIE_IN_A_FIRE

I'm back on Lunix. Installed Arch Linux yesterday, and still trying to make it work just like I'm used to - recreating configs lost a year ago turned out to be easier than expected. Except for some things...

...such as GNOME Keyring, which now doesn't work at all if started from PAM. Apparently, having $GNOME_KEYRING_CONTROL set is not enough anymore – the libgnome-keyring library now only uses DBus to contact the keyring daemon, which doesn't really work when the daemon is started before DBus. Putting gnome-keyring-daemon --start in xinitrc is now needed.

On the other hand, it's actually quite nice to have an easy way to start a daemon like that without having to care about multiple processes, stale environment variables, and such things. Well, at least the problem is reduced to one DBus daemon... If only running ssh-agent were that simple.

Hack the Gibson.freenode.net

On the freenode IRC network, users can get "cloaks" (called "vhosts" elsewhere) signifying their status or group affiliation — or just hiding their real hostname. To avoid clashes with actual hostnames, freenode's cloaks use slashes / as delimiters: freenode/staff/tomaw, archlinux/developer/wonder, unaffiliated/tan. Since you cannot have slashes in your real hostname, cloaks are often used to implement "groups" in access lists (*!*@freenode/staff/* +votsriRfAF is a common sight).

Except you can have slashes in your real hostname.

If you run your own DNS server, all it takes is check-names master ignore; in named's options to make it accept non-hostname characters in hostnames. (Apparently, the difference between a "hostname" DNS entry and a "non-hostname" one is the presence of an A or AAAA record; hostnames are only allowed to contain a-z, 0-9 and '-'.) The rest is as simple as:

$ORIGIN example.com.
DNS/is/fun	AAAA	2001:db8::1337

$ORIGIN 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
7.3.3.1.0.0.0.0.0.0.0.0.0.0.0.0	PTR	DNS/is/fun.example.com.
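The "letters, digits, hyphen" restriction that check-names enforces – and that cloaks rely on for uniqueness – is easy to sketch (the regex below is my paraphrase of the rule, not named's exact check):

```python
import re

# RFC 952/1123 "preferred name syntax": labels of letters, digits and
# hyphens only, not starting or ending with a hyphen.
HOSTNAME_LABEL = re.compile(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$', re.I)

def is_valid_hostname(name):
    return all(HOSTNAME_LABEL.match(label)
               for label in name.rstrip('.').split('.'))

# DNS itself allows almost any byte in a label, so this is a legal
# domain name while failing the hostname syntax:
print(is_valid_hostname("DNS/is/fun.example.com"))  # → False
print(is_valid_hostname("irc.example.com"))         # → True
```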

(At the time of writing this post, freenode had already been fixed to reject users attempting this trick with their rDNS. I still sometimes regret not having connected as freenode/staff/grawity.nullroute.eu.org while I still could. – 2013-07-04)

Go chkdsk yourself, NTFS

So I'm stuck with Windows XP for a (long) while. (Pepsi and laptop hard drives do not mix.)

Several days ago, Windows started prompting for a filesystem check of C: during boot. Every single time I accepted, it would quickly jump to stage 2 (index check) and proceed to print a fuckton of lines exactly like this one:

Deleting an index entry from index $O of file 25.

It never finished cleaning up those index entries (I would always interrupt it after 50 or so minutes), and apparently those deletions it had made weren't written to disk either.

After several hours of googling and reading many pages of NTFS documentation (95% of the sites I found were exact copies of the "Visual Basic NTFS Programmer's Guide" or of the documentation from linux-ntfs), I found out that "file 25" was an NTFS metadata file, \$Extend\$ObjId, used for NTFS's "open by unique ID" functionality. Checking with ntfsinfo showed $O to be over 80 MB (the same index on another partition was about 12 megs).

So I did an experiment. I booted a Linux CD and removed $ObjId, putting ~160 GB of music, movies and porn at risk. When I returned to Windows, all the files were still there and readable. chkdsk did complain about missing indexes, but happily recreated them within several minutes.

On GVFS

Disclaimer: I have never used KDE. I'm currently writing this on Windows XP. My preferred WM is wmii. I also never participate in any holy wars, be they OS-related or not.

GNOME has many components which some consider "bloat" and others just plain hate without any reason. Such as gnome-keyring, which many dislike mostly because NetworkManager requires it. But many of those components are a necessary evil.

Such as GVFS, for example. I really prefer a single, consistent interface that handles FTP and SFTP and SMB/CIFS and WebDAV, to a bunch of separate, mostly FUSE-based {ftp,ssh,smb,dav}fs things of varying reliability – some no longer maintained, others buggy. Linux apparently has in-kernel CIFS support, but then I have to use mount, which requires either r00tness or editing fstab for every damn share. Sure, there's mount.cifs, which can work setuid root – and which I have to manually chmod u+s every time I upgrade.

Compare this to GNOME's GVFS, or KDE's KIO. I can open, say, smb://windozebox/music or sftp://nullroute.eu.org/~/.bashrc or even obex://[01:23:45:67:89:ab]/ in any GVFS-compatible program, and it works. The filesystem is automatically mounted, using credentials stored in gnome-keyring. To the user there's no difference (other than speed) from a local file. Sure, it's like Windows, where you open \\box\share\file.txt and it Just Works™. But does that automatically make it bad?

The same goes for gnome-keyring. It's the only thing, besides the rarely used ~/.netrc, that actually works as a centralized password store. It can even be used for X.509 certs, by any app that supports PKCS#11 (though that part is still very beta). I'm tired of having to separately configure, in each program, where to look for my SSL and S/MIME keys.

(Unfortunately, many programs carry that problem over to Windows – Pidgin, for example – and even though Windows has a central store for SSL keys and root CAs, they cheerfully ignore it and use C:\Program Files\FooApp\ca-certs\. I would be less angry if native Windows programs didn't do the same...)

What I do consider bloat: GNOME integrating Avahi into Seahorse and Epiphany. (Those actually depend on Avahi, not just recommend it.) Opera adding widgets and BitTorrent and IRC into a web browser (and very poor implementations at that). Twitter in iTunes. MSN "nudges" in Pidgin. GConf XML hell. The fuckton of X11 startup scripts. And so on...

(And here I got bored.)

dovecot --exec-mail and dotlock

To speed up mail checks, I access Dovecot's imapd on my server through an SSH tunnel, which executes dovecot --exec-mail imap over the (multiplexed) SSH connection.

Being launched like this, Dovecot doesn't have the access necessary to create dotlock files in /var/mail, where my inbox is stored. (Usually Dovecot's imap-login process would start imap as root, and imap would then switch itself to the group set in mail_privileged_group when necessary.) So I get a ton of messages like this:

Dec 28 21:18:33 wind IMAP(grawity): : file_dotlock_create(/var/mail/grawity) failed: Permission denied (euid=1000(grawity) egid=100(users) missing +w perm: /var/mail) (set mail_privileged_group=mail)

...resulting in Mutt's "Mailbox is read-only" warnings every time I try to delete something.

It would be possible to change the permissions of /var/mail to 01777 (sticky, read/write/execute for everyone), but this feels a little insecure compared to the default 02775 root:mail.

The solution is to give /usr/lib/dovecot/imap access to the mail group using the setgid bit:

chown :mail /usr/lib/dovecot/imap
chmod g+s /usr/lib/dovecot/imap

If you're using a Debian-based distro:

dpkg-statoverride --update --add root mail 2755 /usr/lib/dovecot/imap

unbreaking Calibri.ttf

Calibri, one of Microsoft's ClearType fonts, has a few sets of bitmaps embedded in it, to make it look better when font smoothing is off – which results in Calibri looking just plain ugly in X11/FreeType when you enable hinting.

To disable embedded bitmaps, put this into your ~/.fonts.conf:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
    <match target="font">
        <test name="family" compare="eq">
            <string>Calibri</string>
            <string>Cambria</string>
        </test>
        <edit name="embeddedbitmap" mode="assign">
            <bool>false</bool>
        </edit>
    </match>
</fontconfig>

Installing Flash Player for Firefox

In, uhh, 13 easy steps.

Previously:

  1. Open Firefox.
  2. Click http://get.adobe.com/flashplayer/ in Google.
  3. Click "Download", download a small .exe (which is a self-installing .zip)
  4. Run the .exe

Now:

  1. Open Firefox.
  2. Click http://get.adobe.com/flashplayer/ in Google.
  3. Uncheck "Free McAfee Security Crap"
  4. Click "Download", get nothing.
  5. Notice the Firefox info bar, approve adobe.com for installing software.
  6. Install the "Adobe DLM" extension.
  7. Click "Restart Firefox", wait for Firefox to restart.
  8. Wait while DLM installs itself.
  9. Wait for DLM to download the Flash Player installer.
  10. Uninstall Adobe DLM from Firefox.
  11. Uninstall Adobe DLM from "Add/Remove Programs"
  12. Notice that about:plugins still lists "getPlusPlus for Adobe", find the goddamn .dll file, and burn it in a fire.
  13. Notice that you could have avoided all of this if you clicked the "If it does not start, click here to download" link.

Dear Adobe, you call this convenient?!