
grawity's journal

Untangle Shield & dropped connections

At work, I was investigating an unusually high number of TCP RSTs generated by our Untangle firewall. Often you'd have to reload a website once or twice due to missing CSS or images, and sometimes no amount of reloading would help, with 50–70% of connections getting dropped no matter what – Chrome's "Network" view would be red all over.

This turns out to have been caused by the Untangle Shield feature, which by default rate-limits the number of connection attempts per client IP address.

The problem is, most connections from our network are NATed before they reach the firewall, so Shield doesn't see individual users – all it sees is 6 IP addresses creating massive amounts of connections, and applies overly strict rate-limiting based on that.

Adding a rule to raise connection limits under Configuration → System seems to solve the problem. (Though, despite what the documentation says, I'm really not sure if it's worth keeping the ratelimit at all.)

Digital signature in Lithuania

Started writing govpki-lithuania about qualified digital signatures in Lithuania.

~/.forward versus spam filters

The usual Unix way of forwarding email – putting the destination address in ~/.forward – doesn't work well with spam filters, as some MTAs (including Postfix) reuse the original envelope sender ("MAIL FROM") address, making the forwarded mail fail SPF checks at the destination.

A better method, if the MTA doesn't rewrite the sender natively, is to manually reinject the message as outgoing mail – it then leaves with your own envelope sender. For this, you'd put the following in ~/.forward, including the quotes:

"|/usr/bin/sendmail -i"

(With Postfix the "pipe" function works by default, but elsewhere it might need to be enabled.)

Alternatively, if you want to pre-filter messages, procmail also has a forwarding function built in (and reinjects the message the same way, via sendmail). This is useful if, for example, you want to avoid sending all the spam to Gmail:

# (folder and forwarding address are examples)
:0:
* ^X-Bogosity:.Spam
caught-spam

:0
! someone@gmail.com


recovering from ‘glue objects’ in OpenLDAP – a better solution

Some time ago, I was cleaning up Cluenet's LDAP database and moved a lot of entries under "old stuff" (ou=old-stuff, dc=cluenet, dc=org)… And then I lost them. Lost, as in, they wouldn't show up in Apache Directory Studio, nor in LBE, although I was sure I had seen them somewhere.

Finally I went to find them and noticed that the parent container actually didn't exist, even though the entries did. (I'm not sure how that happened…) So of course there's no way Directory Studio could've known about those "detached" entries without listing the entire database.

So now that I found the problem, naturally I tried to create that ou=old-stuff, but only got an error message saying "Entry already exists" – and trying to delete it gave me a "No such entry" instead. Running slapcat on the server told me that the entry does exist, but as a special magic "glue" objectClass which is visible via syncrepl but not to regular LDAP queries.

There was an older blog post about exactly this situation, where the author dumps the entire database using slapcat and reloads a fixed dump with slapadd. But it turns out that none of that is necessary, at least with modern OpenLDAP versions – the fix can be done entirely using LDAP operations: you can simply delete the "glue" objectClass and replace it with a normal one.

First, since the glue entry exists, it can be read (and modified) using the manageDSAit control, which disables all magic behavior (such as aliases or referrals) and shows the underlying structure.

$ ldapsearch -b ou=Foo,dc=example,dc=org
dn: cn=Bar,ou=Foo,dc=example,dc=org
objectClass: device

$ ldapsearch -b ou=Foo,dc=example,dc=org -M
dn: ou=Foo,dc=example,dc=org
objectClass: glue

dn: cn=Bar,ou=Foo,dc=example,dc=org
objectClass: device

Second, normally the only way to change an entry's structural class is by deleting & re-creating the entry. Since we don't want to delete all sub-entries, we can use the relax control (described in draft-zeilenga-ldap-relax), which removes this particular modify restriction. So in the end, the fix looks like this:

$ ldapmodify -M -e relax <<EOF
dn: ou=Foo,dc=example,dc=org
changetype: modify
delete: objectClass
objectClass: glue
-
add: objectClass
objectClass: organizationalUnit
-
add: ou
ou: Foo
EOF

(Though, interestingly, relax is not required when changing from "glue" to another object class, but it is useful to remember in general.)

TLS in Dell OpenManage

Dell OpenManage Server Administrator (OMSA) is accessed through its built-in web server, which always uses 512-bit DH parameters for HTTPS, so neither Firefox nor Chrome will connect to it anymore.

One fix/workaround is to use ECDH instead – it's supported by OMSA, but disabled by default. Open OMSA using Internet Explorer, and under Preferences → General Settings change the "SSL Encryption" mode to "Auto Negotiate".

(The default mode is "128-bit or Higher", which supposedly has a list of 'strong' ciphersuites, but wasn't updated to include the strongest ones.)

disabling Firefox snippets

I'm not really bothered by Firefox downloading stuff to be shown on about:home – I doubt the CDN keeps logs for long enough to matter, and it's lost in the noise among millions of other Fx users anyway.

It's the downloaded stuff itself that annoys me more and more. Even compared to e.g. Google doodles, it's seriously distracting to see a YouTube video suddenly appear on the startup page, or have to watch a gif over Remote Desktop.

There are instructions on how to disable it, of course, but I didn't like that they involved manually deleting files from the Firefox profile directory, so I found a better way in mozwiki's Snippet Service page:

  1. Visit about:config
  2. Set browser.aboutHomeSnippets.updateUrl to an empty value
  3. Visit about:home
  4. Open the Developer Console
  5. Call gSnippetsMap.clear(); from the console
  6. Reload the home page.
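For scripted or managed setups, step 2 can also be done with a user.js file in the profile directory – a two-line config sketch (the pref name is the one from the list above; an empty URL means the snippet service is never contacted):

```javascript
// user.js in the Firefox profile directory -- re-applied on every startup.
user_pref("browser.aboutHomeSnippets.updateUrl", "");
```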

I might even push this through Group Policy at work – along with other settings like disabling Pocket or shutting up the "first-time" clutter.

keyrings, Chrome & GNOME

I used to think that the password autofill delay in Chromium was some sort of security feature – pausing for a moment after page load to foil scripts that try to harvest autofilled passwords.

Nope. Turns out, Chromium makes ~2000 DBus calls to gnome-keyring (that is, the running fd.o Secret Service) to retrieve all credentials it has stored. Synchronously, one method call after another, two calls per entry. (Yeah, I do have a lot of passwords stored.)

Well, not every time – it does depend on the site. Sometimes Chromium is smart enough to search for the exact signon_realm that it needs; but as far as I can see, it only does that when visiting a site by IP address. When the domain is known, however, it does a wildcard search and filters results client-side.
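To put rough numbers on it, here's a toy model of the two strategies, with a plain dict standing in for the D-Bus Secret Service (store size and call counts are illustrative, following the two-calls-per-entry pattern described above):

```python
# Toy model: 1000 stored credentials, keyed by signon_realm.
store = {f"https://site{i}.example/": f"secret{i}" for i in range(1000)}

def exact_search(realm):
    """Search for one signon_realm: one search call plus one secret fetch."""
    matches = [store[realm]] if realm in store else []
    return matches, 2  # constant number of round-trips

def wildcard_search(realm):
    """Fetch every stored item, then filter client-side: ~2 calls per entry."""
    matches = [v for k, v in store.items() if k == realm]
    return matches, 2 * len(store)
```

Same answer either way, but the second strategy's cost grows with the size of the whole store rather than with the number of matches.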

Perhaps it's needed for the “passwords for related domains” feature? I'm sure that could be done in a more efficient way, like attaching a metadata field.

Anyway, this can be temporarily “fixed” by telling Chromium to use its own SQLite database, using --password-store=basic as a command-line option. It takes a while for Chromium to reimport everything, but once it's done the actual autofill becomes instant.

(The downside is that the password storage is no longer encrypted, so moving it onto ecryptfs under ~/Private/ may be necessary.)

basic QoS notes

So I was always annoyed by the fact that almost any program uploading data over my slow ADSL line would clog the entire connection by default, resulting in huge latency (1.5–2 seconds) and packet loss. Rather annoying when your phone insists on uploading dozens of megabytes of photos (in duplicate) as soon as there's WiFi.

With my current router, though (an ADB Broadband one, provided by the ISP), there seems to be an easy way to improve it by setting up a QoS queue, under “Home / Settings / Routing and QoS / QoS Queues”.

Add a new queue (or edit the existing one if there is one already). Make sure it is set to enabled and that the correct egress interface is chosen (usually “Ethernet over ATM, 8/35”, corresponding to the primary ADSL circuit) – the default of “all interfaces” seems to drop everything on the floor.

Then select BLUE as the dropping policy. Other choices are RED and WRED, which Wikipedia says need quite a bit of manual adjustment, and “Drop tail”, which is just primitive compared to the other ones. There's no CoDel though.

Defaults are fine for the rest (including “None” as the shaping type). As soon as the queue is active, overall latency should drop to 200–500 ms, which is a bit more bearable.

(Sigh. I can already imagine a network nerd from ServerFault reading this and sneering, “how could you possibly not know that?” Well, fuck you too.)

HP LaserJet, WiFi printing, and manual duplex

I've been trying to figure out network printing to our new HP LaserJet. Unlike the old Canon Pixma, this one has built-in WiFi support. Also unlike the Pixma, it doesn't support auto-duplex – the driver needs to prompt you to put the pages back in the paper tray. Which means the driver needs an interactive UI and bidirectional communication with the printer. Which is a pain.

The LaserJet supports most standard network protocols: mDNS, WS-Discovery, SLP for discovery, SNMP for monitoring, and JetDirect for printing, so you can just connect to port 9100 and send PCL to it. Windows can do that natively via the "Standard TCP/IP monitor" (which calls the protocol "RAW"), and that's what the auto-discovery also sets up.
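For reference, "connect to port 9100 and send data" really is the whole raw-printing protocol – a minimal sketch in Python (the hostname and PCL payload below are placeholders, and the PJL envelope is optional for many jobs):

```python
import socket

def build_pjl_job(pcl_data: bytes) -> bytes:
    """Wrap raw PCL in a minimal PJL envelope (UEL + job markers)."""
    uel = b"\x1b%-12345X"  # Universal Exit Language escape sequence
    return (uel + b"@PJL JOB\r\n"
            + pcl_data
            + uel + b"@PJL EOJ\r\n"
            + uel)

def send_raw_job(host: str, data: bytes, port: int = 9100) -> None:
    """JetDirect 'raw' printing is just a one-way TCP stream to port 9100."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(data)

# Usage (hostname is a placeholder):
# send_raw_job("laserjet.example.net", build_pjl_job(b"...PCL data..."))
```

Note that this is strictly fire-and-forget, which is exactly why anything interactive (like the manual-duplex prompt below) needs a side channel.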

However, the standard monitor seems to be too basic – the only form of feedback it supports is retrieving page count and stuff over SNMP. But as soon as the driver wants to show the manual-duplex dialog, it just drops the network connection and screws up the job.

HP drivers work around it by installing "Advanced TCP/IP monitor", mwtcpmon.dll, and having their own printer discovery tool use that instead of the standard one. The "Advanced" monitor has a settings panel almost identical to the regular one, but supports more PCL features, such as retrieving printer status over the same PCL connection without needing SNMP.

The manual-duplex confirmation dialog also works; however, it seems to use some custom HTTP messaging on port 8080 instead of standard PCL:

POST /dev/controlPanel.xml HTTP/1.1
USER-AGENT:hp Proxy/3.0

<?xml version="1.0" encoding="UTF-8" ?>
<controlPanel xmlns="…" xmlns:xsi="…" xsi:schemaLocation="…">

HTTP/1.1 202 Accepted
Server: Mrvl-R1_0

Oh well, at least it works. On Windows. I still haven't quite figured out how CUPS is going to deal with the 'interactive' part, even though HPLIP seems to be fairly good overall.


I’m still trying to understand one particular design choice of the ‘modern’ web.

Many websites these days have started using a sort of “block” or “chip” style for their <code> tags – with extra padding, a distinct background, and sometimes even a border, to distinguish from the surrounding text. Like this, for example, taken from the Git Book:

This leaves four important entries: the HEAD and (yet to be created) index files, and the objects and refs directories. These are the core parts of Git. The objects directory stores all the content for your database, the refs directory stores pointers into commit objects in that data (branches), the HEAD file points to the branch you currently have checked out, and the index file is where Git stores your staging area information. You’ll now look at each of these sections in detail to see how Git operates.

At first glance it looks kinda pretty, but it's distracting, it breaks up the sentences, and most of the time it's overdone to the point of making the text harder to read. Isn't a good monospace font already enough? Compare:

This leaves four important entries: the HEAD and (yet to be created) index files, and the objects and refs directories. These are the core parts of Git. The objects directory stores all the content for your database, the refs directory stores pointers into commit objects in that data (branches), the HEAD file points to the branch you currently have checked out, and the index file is where Git stores your staging area information. You’ll now look at each of these sections in detail to see how Git operates.


The login/directory server I manage at work recently got a hardware upgrade, from an old Pentium III with a 16 GB disk (doubling as my table/footrest) to a less old i5 with two half-terabyte disks. (In hindsight, I should have set up a RAID, but for some reason I decided to use the second disk for rsync backups instead.)

Anyway, since I have some spare disk space online, I decided to host some of my mirrors on it, as well as a pile of IRC software I had sitting on my laptop. See fs1:mirrors/, it's not very organized.

Also enabled opportunistic encryption for the site, so Firefox 37 will encrypt the communications even when using plain http://. Though it's somewhat strange that Firefox uses the Alt-Svc: header for this, while Chrome has had Alternate-Protocol since earlier.

Alt-Svc: spdy/3.1=":443" (from Firefox)
Alternate-Protocol: 443:npn-spdy/3.1 (from CloudFlare)
Alternate-Protocol: 80:quic,p=0.5 (from Google)
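Both header styles are simple comma-separated lists; a quick sketch of pulling the protocol/authority pairs out of an Alt-Svc value (not a complete parser – extra parameters are left attached):

```python
def parse_alt_svc(value: str):
    """Split an Alt-Svc value like 'spdy/3.1=":443"' into (protocol, authority) pairs."""
    entries = []
    for field in value.split(","):
        proto, _, authority = field.strip().partition("=")
        entries.append((proto, authority.strip('"')))
    return entries

# parse_alt_svc('spdy/3.1=":443"') -> [("spdy/3.1", ":443")]
```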


At work, we have various business-oriented software set up – some for use by students, some used by the actual administration. Some of them are for accounting/bookkeeping, others for "process management", and so on. All of them are awful.

While I don't actually use any of those programs daily, I do manage the network and servers they all run on, and I've noticed that all of them do almost everything client-side. More specifically, they do all security checks on the client side. I mean, those programs store accounting information – you'd think the authors would want to make them more secure than average? Apparently not.

All these programs ask you to configure the DB server address/username/password on first run. Later you get a generic login dialog, the program connects directly to the SQL server, selects your account details from the 'users' table, compares the received password hash against what you just entered (similar to /etc/shadow with NIS), and hides some menus & buttons depending on your privilege bits.
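The pattern, in miniature (sqlite3 stands in for the real MySQL/Firebird server here; the table layout and account name are made up for illustration):

```python
import hashlib
import sqlite3

# Stand-in for the shared database every client connects to directly.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (login TEXT, pwhash TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('jonas', ?, 0)",
           (hashlib.sha1(b"hunter2").hexdigest(),))

def client_side_login(login: str, password: str) -> bool:
    """The *client* fetches the stored hash and compares it locally --
    anyone holding the shared DB credentials can skip this step entirely."""
    row = db.execute("SELECT pwhash FROM users WHERE login = ?",
                     (login,)).fetchone()
    return row is not None and row[0] == hashlib.sha1(password.encode()).hexdigest()

# The login dialog is purely decorative: the same connection can read
# (or rewrite) the whole table regardless of who "logged in".
all_hashes = db.execute("SELECT login, pwhash FROM users").fetchall()
```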

This would be fine if it were a web app, but I'm talking about Windows desktop programs, where these checks are trivial to bypass using a regular debugger.

You might not even need to do that, though, since client-side checks automatically mean that the program itself usually has unrestricted access to the database – hell, if you let the vendor install it, they'll always try to configure all clients to connect as either 'root' or 'SYSDBA', depending on whether it's MySQL or Firebird on the other end.

All those login dialogs, and all you need to pwn it is to read config.ini. (That is, assuming they at least change the default password; some don't even do that. I just had one vendor tell me that they've been using the default password for the super-privileged 'SYSDBA' account because they "have never had problems with this in the past".)

If that wasn't bad enough, this new program I've been installing today certainly tops everything. Remember that database configuration dialog? This one only needs a hostname because it uses a hardcoded MySQL account name and password for everything. Remember the hash comparison? This one doesn't even do that; the password column is in plain text.

Let me say it again. You can send a hardcoded username and password to the MySQL server and do a "SELECT * FROM users" to get every user's login details. Feel free to "DELETE FROM every damn table" while you're at it, too.

In other words, oh gods why are we spending thousands on this shit.

