hardware, linux – February 17, 2018 – Jack

Router Refresh: Netgear R7800 as a torrenting NAS and PXE server with LEDE

More than a few years ago, I invested in a Netgear WNDR 3800. I bought it specifically because it was supported by OpenWRT, a Linux-based firmware similar to DD-WRT, capable of running a lot of extra services at the center of your network.

Time marches on, however, and despite being the Rolls-Royce of routers five or six years ago, the WNDR 3800 started to show its age. Its 680 MHz MIPS processor wasn’t beefy enough to run a VPN endpoint on the router with any real bandwidth, for example, but the main reason for an upgrade was its lack of support for 802.11ac, which was standardized after the 3800 hit the market, much less 802.11ac “wave 2”, which is the current best option.

In the meantime, I also became an AMD employee and suddenly had an embarrassing array of Intel processors sitting on my home network. While I can’t reasonably go out and replace every machine in the name of some sort of ideological purity (although I did promptly build a Ryzen 7 1700X rig for myself), eliminating some of these Intel devices has a certain appeal. Case in point: the little Celeron machine serving as a NAS. At one point this box ran a desktop on the TV via HDMI, controlled by Synergy, SSH, Kodi, or PS3 controllers over a Bluetooth dongle. Since then we’ve started using Steam Links, which cover the HDMI/controller case much, much better, so the Celeron basically existed just to run the RAID, Samba, Transmission, and Sonarr. None of which requires a particularly beefy machine, or HDMI, any longer.

So, to upgrade my home network and potentially eliminate a redundant machine, I decided it was time to throw down on a new router setup and, to keep this from being an unmaintainable mess as soon as I’ve moved on, I figured I’d document how I set everything up.

The Hardware

I initially looked at some Ubiquiti hardware. The Power-over-Ethernet single-cable approach was appealing, as were the sleek interfaces, but it’s clear those devices are designed for a scale-out deployment I didn’t need, and they wouldn’t be much help replacing a NAS either.

So I settled on the Netgear R7800 “Nighthawk X4S” which looks a bit like a stealth bomber.

I’ll admit I was showing a bit of favoritism to Netgear after my 3800 proved such a workhorse, but so far it’s been a good choice. Instead of a rickety 680 MHz MIPS, this bad boy came with a dual-core 1.7 GHz ARMv7 processor and 512M of RAM. Dual USB 3.0 ports (or eSATA if you prefer). 802.11ac wave 2 support. Pretty standard wired networking (4x LAN, 1x WAN).

I paid about $220 for it at my local Fry’s (similar prices on Amazon/Newegg), and while it’s not objectively the best router on the market, I didn’t feel the extra features of the more expensive models warranted the extra cost. Most of them have similar processors and RAM, similar range, etc. Most importantly, however, the R7800 has pre-built OpenWRT images.

I also purchased a HornetTek four-bay SATA enclosure that supports USB 3.0.

I hadn’t ever heard of HornetTek, but SATA enclosures are simple devices and so far it’s worked admirably. It doesn’t do any sort of hardware RAID; it just enumerates the four drives separately. I chose it specifically because it has four bays instead of two, which lets me run my 2x2TB RAID devices along with a single 120G SSD.

LEDE

I won’t cover the basics about router setup, running ethernet cables, plugging in the enclosure etc. It’s all rather common knowledge and if you can’t figure out that part based on the diagrams that came with your router, then you need a much more basic page than this one, trust me.

Once your router is online running the factory firmware, the first thing you want to do is install LEDE, the fork that now hosts OpenWRT development (the two projects have since agreed to re-merge). The switch is simple: just make sure you grab the right model and choose a -factory.img from their download page, which you can then install using the factory firmware’s upgrade mechanism.


Caveat 1: Snapshot Images

Here’s the first little bump in the setup. Later, we’ll want to use a standard Linux tool, mdadm, to set up our RAID. Unfortunately, the current release of LEDE (17.01.4, from October 2017) doesn’t include the kernel support for “direct IO” that software RAID requires. The wise developers of LEDE have since corrected this minor issue, so while we wait for the next official release, the snapshot images work … for the most part. The issue is that snapshots don’t include the web interface, LUCI, but to access the router via SSH you need a password configured… which you can only do through the web interface on a fresh install. As a result, until the next official release you may need to install a release image, set a password, then install a snapshot sysupgrade image. After that you can SSH into the router and install LUCI with opkg update; opkg install luci.


Once you’ve installed LEDE, you can tweak all of the standard router knobs: WiFi SSID, encryption, static DHCP leases, etc.

RAID

With the drive bay plugged in and powered on, we need the USB storage kernel modules to enumerate the devices, as well as mdadm and kernel support for RAID.
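On a fresh install, that amounts to something like the following (package names as of the 17.01/snapshot feeds; kmod-usb3 may already be built into your image):

    opkg update
    opkg install kmod-usb-storage kmod-usb3 kmod-fs-ext4
    opkg install mdadm kmod-md-raid0 kmod-md-raid1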

With my setup, I have two 2TB drives. Each drive has two partitions: a small, rarely used backup partition that is RAID1 (mirrored) across the drives, and a larger RAID0 (striped) partition I use for torrents. The idea being that if a drive fails, the (replaceable) torrents are toast, and I can recover the backup off of the remaining disk. This isn’t ideal in that I’m putting wear on both backup disks, which may then fail around the same time, but this backup is in turn backed up on other machines.

Anyway, with the proper modules installed, this is my drive setup:

    Device    Partition 1    Partition 2
    sda       swap           extra
    sdb       backup         torrents
    sdc       backup         torrents

Creating a RAID

There are a ton of resources to teach you how to create a RAID with mdadm. I used the information in the ever-useful Arch wiki on RAID installation. The short version, however, is that you install a number of disks, partition them identically (or as nearly as possible), and then use mdadm to create the array. I created my array previously, so I haven’t done this in a LEDE environment, but starting from freshly partitioned disks I would have created my arrays with these commands:
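    # sketch, using the device names from the table above --
    # double-check yours against /proc/partitions first
    # RAID1 (mirror) across the small backup partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    # RAID0 (stripe) across the large torrent partitions
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2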

After the initial creation, the RAID devices (i.e. /dev/md0) act just like standard block devices, so you’d throw a filesystem on them (mkfs.ext4 /dev/md0) and mount them as usual.

Configuring mdadm

There is no LUCI integration for mdadm, so I hand-configured this in /etc/config/mdadm:
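    config array
            option device '/dev/md0'
            option name 'arch:0'
            list devices '/dev/sdb1'
            list devices '/dev/sdc1'

    config array
            option device '/dev/md1'
            option name 'arch:1'
            list devices '/dev/sdb2'
            list devices '/dev/sdc2'

(The array names here are illustrative, and the option names follow the sample config the mdadm package installs, so compare against yours.)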

I don’t think the name is important, but I carried over the same name from when my RAID was created (unsurprisingly during an Arch install, thus the name). I also would have preferred to use the UUIDs from my old mdadm.conf, but that brings me to another issue. I’m not even certain the above configuration is correct, because…


Caveat 2: mdadm segfaults

This may be an issue with the mdadm packaged in the snapshot tree, but any time I ran an mdadm command that parsed a config, it segfaulted. The mdadm.conf that the LEDE mdadm package creates based on the above LEDE config appears to be correct, but the init script just assumes everything worked out when invoking it. I haven’t spent any time trying to figure out who is “responsible” for this – whether mdadm is broken, a LEDE patch is at fault, or the segfault is just a weird symptom of the kernel missing some sort of support (which is how the lack of direct IO support revealed itself).

To avoid parsing a config file, I thought about editing /etc/rc.local, but that code is only invoked after the other init scripts have run, and I want the RAID up before services like Samba or Transmission start. So, as a really shitty hack, I edited my installed copy of /etc/init.d/mdadm and replaced the line that does the final invocation with two lines that manually start the RAID devices without parsing a config:
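    # assemble the arrays by explicit member list so mdadm never
    # has to parse a config file (which is what segfaults);
    # device names again match the table above
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
    mdadm --assemble /dev/md1 /dev/sdb2 /dev/sdc2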

Once I’m running an actual release version of LEDE instead of snapshots, I’ll take a closer look at this.


Automounting

Once mdadm has been properly configured (or horribly hacked, depending), we can set up mount points. Fortunately, LUCI has native support for this if you install the block-mount package with opkg install block-mount. Once that’s installed, you should see a “Mount Points” option underneath the “System” tab in LUCI, which will take you to a screen where you can add mounted filesystems. Here’s an example of my setup:
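The same setup expressed in /etc/config/fstab, if you’d rather not click through LUCI (device names are from my box):

    config swap
            option device '/dev/sda1'
            option enabled '1'

    config mount
            option target '/mnt/backup'
            option device '/dev/md0'
            option fstype 'ext4'
            option enabled '1'

    config mount
            option target '/mnt/torrents'
            option device '/dev/md1'
            option fstype 'ext4'
            option enabled '1'

    config mount
            option target '/mnt/aux'
            option device '/dev/sda2'
            option fstype 'ext4'
            option enabled '1'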

As you can see, I’ve got two RAID mounts at /mnt/backup and /mnt/torrents, along with a small SSD that’s providing 4G of swap space (to be safe) and an extra partition that we’ll use later to host TFTP/NBD files and to run Sonarr.

At this stage, you should be able to reboot the router and have the RAID come up and get mounted automatically.

Transmission

My next step was to get the router handling torrents. Fortunately, this is a very common use case for LEDE and it’s well supported. I use the web interface and the RPC API (to communicate with Transdroid, an Android torrent app).
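Installation is a few packages (depending on the release, the daemon may be split into SSL-variant packages like transmission-daemon-openssl):

    opkg install transmission-daemon transmission-web luci-app-transmission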

For the most part, the default selections for Transmission are good. It’s configured to allow connections from LAN addresses without a password. With the LUCI app installed, there’s a new “Transmission” option under the “Services” tab that will let you set the download directory to your USB storage (i.e. /mnt/torrents in my case). The settings do seem a little conservative, however. I upped the cache from 2M to 16M to give Transmission a bit more memory to fill before hitting the disk, for example, just because I still have quite a bit of headroom in my 512M of RAM.

Once you’ve pointed Transmission to your torrent drive, it should be good to go.

One thing to take note of with Transmission is that by default its configuration directory is under /tmp, which is lost on reboot. This includes any settings you set outside LUCI through the Transmission interface itself, as well as all of the accumulated torrents you’ve uploaded. In general this isn’t an issue, but if your router goes down, be prepared to re-upload torrents to complete them or meet any tracker seeding requirements. If this is a problem, you can shift the Transmission config directory to a real disk instead of /tmp.
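Both of those tweaks live in /etc/config/transmission; a sketch (the option names mirror Transmission’s settings.json keys, and the config_dir path is just my choice):

    config transmission
            option enabled '1'
            option download_dir '/mnt/torrents'
            option cache_size_mb '16'
            # move this off /tmp if you want settings and torrents
            # to survive a reboot
            option config_dir '/mnt/aux/transmission'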

Make sure that your torrents directory is writable by transmission as well:
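    # the LEDE package runs the daemon as the transmission user
    chown -R transmission:transmission /mnt/torrents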

At this point, you should be able to upload a torrent via the web interface at 192.168.1.1:9091.

Samba

Now that you have a disk to fill up, you need to be able to share all those files. Fortunately, LEDE has great support for Samba, which most operating systems and media players speak to some extent. Simply install Samba and the corresponding LUCI app:
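    # package names from the current LEDE feeds
    opkg install samba36-server luci-app-samba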

Now, under LUCI’s “Services” tab, there’s a “Network Share” option where you can configure individual shares. Here’s a shot of my setup:

Pretty straightforward, enabling guest access to each share.


Caveat 3: Permissions

One issue I ran into was ensuring that Samba guests and Transmission both have full access to the drive. By default, Samba maps the guest user to the group-less ‘nobody’ user, which won’t have permission to write the torrents directory. I chose to edit the template, either through the “Services” -> “Network Shares” -> “Edit Template” box or by directly editing /etc/samba/smb.conf.template, to set something like:
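    # one way to do it: map guests to the same user the Transmission
    # daemon runs as, so guest writes and Transmission's own files
    # all end up with one owner
    guest account = transmission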

Fortunately, this template is preserved during sysupgrade. Initially, I just made the torrents directory owned by nobody:transmission as a sort of compromise, but files created by Transmission are owned by the transmission user by default, so I lost write access through Samba guests after the initial manual chown.


At this stage, you should be able to access your torrents through your standard SMB-capable programs: Kodi, Nautilus/Nemo, Windows Explorer, etc.

Sonarr

Here’s where we deviate from the well-worn path of LEDE. Sonarr is a great web app for tracking TV shows, watching schedules, and automatically downloading torrents from your configured trackers (or Usenet). Unfortunately, it also depends on Mono and some other dependencies that are way out of scope for LEDE packages. I’m not a fan of Mono, but Sonarr is a massive leap up from other solutions like Sickrage in my experience.

So, since we’re moving well out of the router wheelhouse, and I don’t want to do something crazy time consuming like packaging Sonarr and all of its dependencies myself, we’re going to rely on our old friends at Debian to do the heavy lifting for us with their armhf port.

Setting up a Debian chroot

Fortunately, the LEDE developers make this really easy. The first step is to install debootstrap, which already has a package:
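    opkg update
    opkg install debootstrap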

Then, we install our pocket Debian. I chose Debian’s perpetually rolling ‘sid’ because I’m very comfortable with Debian if it breaks, but you may want to stick to a stable release like ‘jessie’ or ‘stretch’. Frankly, I don’t imagine I’ll be updating this chroot very often, so the release isn’t that important, but I can vouch for ‘sid’ including all of the Mono dependencies I needed.

I have my 116G ext4 partition mounted at /mnt/aux, separate from my RAID/Transmission/Samba setup. Debian chroots are pretty lightweight when you only install one thing; even pulling in the entire Mono ecosystem, my current chroot is under 2G. Obviously far too big for router flash, but hardly a real issue for average-sized USB storage. There’s also nothing keeping us from hosting this chroot on the RAID devices; I just happened to have a spare disk since I was scrapping a machine.

Note that armhf is the proper architecture for the IPQ806x in the R7800 and I imagine for all of the other recent embedded ARM variants until arm64 filters down to routers, but it’s possible that your router has a different architecture, so choose wisely.
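With that settled, the bootstrap itself is one command (I put my chroot at /mnt/aux/debian; substitute your own path, release, and preferred mirror):

    debootstrap --arch=armhf sid /mnt/aux/debian http://deb.debian.org/debian/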

This will take a bit.

As an aside, a chroot like this is very useful as a sort of primitive proto-container, and with a little more legwork we could theoretically run all of our other daemons out of this Debian environment as well. For the most part, there really isn’t a reason to – the packages provided with LEDE are sufficient and minimal – but if you needed a newer or heavier version of something (say Samba 4 instead of Samba 3.6, or a full-blown mail server), grafting in a working system saves you a lot of effort compared to homebrewing your own packages.

Once debootstrap completes, I recommend mounting some filesystems into it:
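    # virtual filesystems the chrooted programs expect
    mount -t proc proc /mnt/aux/debian/proc
    mount -o bind /dev /mnt/aux/debian/dev
    mount -o bind /dev/pts /mnt/aux/debian/dev/pts
    # expose the torrent drive inside the chroot for Sonarr
    mkdir -p /mnt/aux/debian/mnt/torrents
    mount -o bind /mnt/torrents /mnt/aux/debian/mnt/torrents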

The first three mounts just make sure that /proc, /dev, and /dev/pts are present so that programs in the Debian chroot won’t get confused if they rely on any of these virtual entries, or try to create pseudo-terminals. Without these mounted, you can end up with many different types of misbehavior. Mono, for example, won’t even try to run if /proc isn’t mounted. Debian’s apt-get will complain about missing /dev/pts as well, and some package installation scripts may fail depending on how finicky they are about such things.

In theory, we should bind mount /sys as well, but in my experience that’s unnecessary.

The last mount is to make sure that the torrent drive is accessible to Sonarr, which will import and rename items for you if it can. If you just want Sonarr to dispatch torrents and not manage the downloaded files this is probably unnecessary.

Now you should be able to enter the Debian environment like this:
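    # path from the debootstrap step above
    chroot /mnt/aux/debian /bin/bash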

You need to specify /bin/bash because the LEDE busybox shell is actually ash. Once you’re in the chroot, make sure apt-get is operational:
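    apt-get update
    apt-get upgrade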

You shouldn’t have any upgrades since you just installed it, but it should gracefully tell you so.

Installing Sonarr

Now the easy part. Thanks to the Sonarr devs, their site includes instructions to install on Debian which you can follow verbatim.

Start by adding the Sonarr public key to your chroot’s keyring and their repository to your apt sources.
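At the time of writing, that boils down to the following (verify the key ID against their current instructions; you may also need gnupg installed in the chroot for apt-key to work):

    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys FDA5DFFC
    echo "deb http://apt.sonarr.tv/ master main" > /etc/apt/sources.list.d/sonarr.list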

Then you’re ready to actually install Sonarr, which is still called nzbdrone after the initial Usenet implementation:
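    apt-get update
    apt-get install nzbdrone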

Running Sonarr

From outside the chroot (use ‘exit’ if you’re still in it), you can always chroot into Debian for just one command instead of invoking a shell:
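    # for example, check the Mono version Sonarr will run under
    chroot /mnt/aux/debian /usr/bin/mono --version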

To integrate the chrooted Sonarr with LEDE, just paste the following text into /etc/init.d/sonarr.
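Mine is a simple procd script along these lines (the chroot path is from the debootstrap step, /opt/NzbDrone is where the Debian package put things for me, and busybox provides chroot at /usr/sbin/chroot):

    #!/bin/sh /etc/rc.common

    START=99
    STOP=10
    USE_PROCD=1

    CHROOT=/mnt/aux/debian

    start_service() {
            procd_open_instance
            # env sets $HOME so mono drops its config in /root/.config
            # instead of the completely inappropriate /.config
            procd_set_param command /usr/sbin/chroot $CHROOT \
                    /usr/bin/env HOME=/root /usr/bin/mono /opt/NzbDrone/NzbDrone.exe
            procd_set_param respawn
            procd_close_instance
    }

    stop_service() {
            # kill the chrooted mono process so it isn't holding our
            # bind mounts open when we want to unmount
            kill $(pidof mono) 2>/dev/null
    }

Make it executable and enable it with chmod +x /etc/init.d/sonarr; /etc/init.d/sonarr enable.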

Then Sonarr should show up as a service in LUCI’s “System” -> “Startup” screen at the very bottom along with Transmission and any other lower priority init daemons.

A few notes on the init script. We need to invoke mono with env so we can set $HOME properly and Sonarr will choose /root/.config instead of the completely inappropriate /.config. This may not be strictly necessary, but it means that if you chroot into the Debian environment and run mono by hand, you get the same result as starting it with this init script. Also note that stop_service() manually kills Sonarr so it won’t be occupying our mounts. This ensures we can properly unmount, but it’s also probably not good practice. I did attempt to add the virtual filesystem mounts to the official LEDE storage mount point system (what we used above for the RAID devices), but it didn’t like the bind as written, so I just added them here where I have full control over the command invoked.

To keep this service intact after a sysupgrade, we can add it to /etc/sysupgrade.conf:
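    echo /etc/init.d/sonarr >> /etc/sysupgrade.conf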

Configuring Sonarr

With Sonarr running, you can access its web interface at 192.168.1.1:8989, and because it’s in the Debian chroot, all of its internal settings will be persistent.

Other resources have covered setting up Sonarr (like the Sonarr wiki), but the short version is that you want to set up at least one Indexer (a source of torrents, found under Settings -> Indexers). These encompass most of the major private TV trackers, and you’ll likely need to paste an API key into their configuration. Sonarr provides a convenient “Test” button that will do a basic sanity check without actually downloading anything.

After you have an indexer set up, add a Download Client (under Settings -> Download Client) for your local Transmission server: the host is the router itself (192.168.1.1, port 9091), with no credentials needed from the LAN.

Once you’ve done that, you’re all set to start adding shows you’re interested in. The search is under “Series” -> “Add Series” and will let you add each series with options for the desired profile. I suggest starting with a 720p profile by default and upgrading to 1080p for any series you think are worth it. Personally, spending 50G downloading 1080p Blu-ray rips of an office sitcom seems like a waste. Especially when…


Caveat 4: Sonarr double disk space

One flaw with Sonarr is that it’s hard to balance the desire to organize and rename files against the need to seed consistently. In theory, Sonarr supports hard-linking files instead of copying them (which would allow the filename to differ without keeping an extra copy of the data), but I have not seen this actually work. I just make sure I go through and clean up my torrent list every once in a while.

There are two options I want to point out, though. The first is that you don’t have to let Sonarr organize at all: disabling “Completed Download Handling” in the download client settings keeps Sonarr from copying files, and it will operate only as a torrent dispatcher. The second is that you can enable “Ignore Deleted Episodes” in the “Media Management” settings. This allows you to hand-delete series you’re done watching without Sonarr freaking out and re-downloading them.


In my setup, I chose to allow Sonarr to import/rename everything, but to ignore deleted episodes. I go in and clear my torrents on a regular basis and if I need to free up further space, I go and delete shows I’m not actively watching.

Sonarr is pretty intelligent about downloading as well. When you add a series, it will ask what part of the series you want to “monitor”, and this monitoring determines what Sonarr will automatically download. New episodes are automatically monitored, but for the purposes of trying out a show, monitoring the first season is plenty; or if you’ve caught most of the latest episodes but are a bit behind, just grab the last season, or only future episodes. After adding, the monitor status is indicated by the little banner icon on the individual show pages.

In this screen, the season is monitored (indicated by the big black banner icon), but the individual episodes aren’t (small white banner icons), so Sonarr will attempt to download the whole season. If the season were unmonitored but an episode was, Sonarr would attempt to download the episode alone first, and then fall back on the season if there was no other option.

Anywhere you see a magnifying glass icon, you can manually initiate a search for a specific torrent, and if you click the little head-and-shoulders icon, it will even let you manually select a release. Both of these can be extremely useful if you’re in a hurry, or for debugging why Sonarr picked one release over another.

Sonarr can be a tad idiosyncratic, however. One little annoyance is that whenever you add a series, the options you choose (quality profile, season folder settings, etc.) become the defaults for the next series you add. So if you add a daily news show once, the next time you add a series it will default to a daily show. It’s not the end of the world, of course, but you need to be careful, especially when adding multiple shows during the initial setup.

PXE

Moving away from the entertainment sphere, I wanted to be able to PXE boot a Linux install image. For those unfamiliar, PXE (Preboot eXecution Environment) is a nice BIOS-enabled method to DHCP a wired network interface, fetch a specific file, and boot it. This can be quite useful for accessing live distros without a DVD or a USB stick.

In my case, my work laptop is disappointingly locked down. The BIOS options to boot from USB are disabled, and Windows’ laughable disk encryption, BitLocker, apparently doesn’t play nice with UEFI booting, so the machine is forced into legacy mode. To install Linux, I had to resize the encrypted partition (which required talking to IT for some bullshit passcode to unlock it afterward), then PXE boot Arch. Because of being forced into legacy BIOS mode, installing a Windows update is like playing Russian roulette with my bootloader. The point being that I need a PXE server every time Windows randomly borks my disk, and since PXE is intrinsically linked to DHCP, the router is the prime location to set it up.

Most of this setup was cribbed from the OpenWRT page for PXE booting, combined with some Arch specific settings on the Arch wiki page for PXE.

syslinux

Before we configure our PXE server, let’s set up a directory to serve. I decided to use more of the auxiliary SSD mounted at /mnt/aux to serve Syslinux’s PXELINUX, a PXE bootloader capable of loading Linux kernels over the connection negotiated by PXE. I downloaded Syslinux 6.03, untarred it, then copied the binary implementations directly out of the release into /mnt/aux/tftp, the root of my soon-to-be TFTP server:
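    # paths inside the syslinux-6.03 release tarball
    cp bios/core/pxelinux.0 /mnt/aux/tftp/
    cp bios/com32/elflink/ldlinux/ldlinux.c32 /mnt/aux/tftp/
    cp bios/com32/menu/menu.c32 /mnt/aux/tftp/
    cp bios/com32/lib/libcom32.c32 /mnt/aux/tftp/
    cp bios/com32/libutil/libutil.c32 /mnt/aux/tftp/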

The important files here were identified in the OpenWRT wiki; they include the basic binary (pxelinux.0) as well as enough support modules to display a nice menu and boot Linux.

DHCP/TFTP

Unsurprisingly, one place the LEDE project spared no expense (in terms of configuration options, at least) is its DNS/DHCP server, dnsmasq. dnsmasq includes a basic TFTP server implementation that we can enable directly in the LUCI interface under “Network” -> “DHCP and DNS”, where there’s a TFTP settings tab with just a few options to set.
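If you’d rather skip LUCI, the same knobs live in the dnsmasq section of /etc/config/dhcp:

    config dnsmasq
            # ... existing options ...
            option enable_tftp '1'
            option tftp_root '/mnt/aux/tftp'
            option dhcp_boot 'pxelinux.0'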

Once you enable the TFTP server, you should be able to PXE boot a wired device (usually by rebooting and mashing F12 or some other BIOS hotkey) and it will get far enough to show you a GRUB-like bootloader screen, without any options, because you still need to configure PXELINUX.

Using an Arch ISO

Now that we have PXELINUX booting without a configuration, we need to give it something useful to do. I like to use Arch install media as a basic recovery tool or fresh installation environment. It has most of the relevant disk, crypto, and network tools I need to nip in and fix a broken bootloader, or transfer things off of backup disks.

I torrent my copies of the Arch install media, so the router naturally has access to the ISO in /mnt/torrents. On my first stab at this, I thought I’d be clever and take advantage of that by serving the files out of a loopback-mounted ISO, using no extra space. This turned out to be impractical because the LEDE mount point system doesn’t appear to support loopback mounts any better than the bind mounts I tried for the chroot. So, instead of adding yet another temporary hack to /etc/rc.local or somewhere else, I decided to just unpack the ISO and copy its contents into /mnt/aux/archiso.

Inside, there are a handful of notable files. The kernel and initrd we’ll need PXELINUX to serve are in archiso/arch/boot/x86_64/, as vmlinuz and archiso.img. There’s also the ISO’s built-in Syslinux configuration in archiso/arch/boot/syslinux, where we can see just what arguments Arch expects. Specifically, there are three options in archiso_pxe.cfg that enumerate our network boot choices: HTTP, NFS, or NBD.

HTTP boot would be fine, and in fact the LUCI interface depends on uhttpd, so it would be convenient to just symlink the ISO directory to /www/archiso and call it a day, but this method didn’t work on my system. Arch fetches files from the server with curl, which will choke on SSL problems, like the self-signed certificate the default uhttpd server advertises when the luci-ssl package is installed. I don’t care to mess with certificates that are basically unavoidably vague (you are visiting 192.168.1.1, after all), so I moved on.

NFS is a good, mature candidate, but it’s also extreme overkill in my opinion. We’re only going to have one user at a time, read-only; we don’t need most of the robust features NFS offers.

Which leads us to NBD, which is less featureful, dead simple to configure, and provides better performance for our use case anyway.

One advantage NFS would offer is the ability to serve the unpacked archiso directory directly, whereas NBD requires access to the .iso, but since I generally have a copy in /mnt/torrents anyway, that isn’t much of an upside compared to the extra complexity and dependencies of NFS. I didn’t gather any metrics on resource usage, but it’s hard to imagine NFS ending up lighter than NBD when NBD takes up about 500k of RAM and NFS has so many more bells and whistles. Anyway, I chose to go with NBD.

NBD

The NBD server is trivial to install and configure. Just grab the nbd-server package, and add your configuration to /etc/config/nbd-server.

Here’s an example of my /etc/config/nbd-server:
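    config nbd-server
            option enabled '1'

    config share
            option name 'archiso'
            option path '/mnt/torrents/archlinux-2018.02.01-x86_64.iso'
            option readonly '1'

The ISO filename is just the release I happened to have on hand, and I’m recalling the section and option names from the sample config the package installs, so compare against its defaults. The share is named archiso because, if I recall correctly, that’s the export name the Arch initrd’s NBD hook looks for.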

Just make sure the service gets started, and you’re good to go. Unfortunately, this configuration does mean we’ll have to update the file whenever we update the Arch ISO, instead of just overwriting /mnt/aux/archiso with the new files.

Final PXELINUX Configuration

Now that we have everything we need unpacked and the NBD server ready, we can put everything together into our PXELINUX configuration. First, we’ll symlink the kernel and initrd someplace the TFTP server will serve them from. I decided to just drop the symlinks in the TFTP root directory:
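    ln -s /mnt/aux/archiso/arch/boot/x86_64/vmlinuz /mnt/aux/tftp/vmlinuz
    ln -s /mnt/aux/archiso/arch/boot/x86_64/archiso.img /mnt/aux/tftp/archiso.img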

In the same directory, create a pxelinux.cfg directory, and in that directory create a default file. In there, we can paste our configuration:
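Here’s roughly what mine looks like (the two custom kernel options are explained below):

    DEFAULT menu.c32
    PROMPT 0
    TIMEOUT 100

    LABEL arch
        MENU LABEL Arch Linux x86_64 (NBD)
        LINUX vmlinuz
        INITRD archiso.img
        APPEND archisobasedir=arch archiso_nbd_srv=${pxeserver} iommu=off nomodeset
        SYSAPPEND 3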

This is mostly boilerplate taken from the previously mentioned archiso/arch/boot/syslinux/archiso_pxe.cfg, except the paths are all flat (the files are in the TFTP root) and the NBD server has been subbed in. Note that the iommu=off and nomodeset options are custom: my laptop has a buggy IOMMU and its framebuffer gets scrambled if it tries to do early modesetting in the Arch ISO environment. You probably won’t need these options, but if you do end up requiring custom options you don’t want to hand-insert at the Syslinux prompt, this is the place for them.

Once you have this file in place, you should be able to boot to an Arch ISO prompt on any typical computer wired into your LAN, without messing with USB keys.

Final Thoughts

This is a work in progress. I’m a little disappointed that I had to hack mdadm’s init script, but other than that most of this configuration should be bulletproof across reboots and sysupgrades. I would have liked to use HTTP for the Arch ISO as well, if the SSL configuration weren’t in the way. I could uninstall luci-ssl, I suppose, but that seems like a lateral move when NBD is working just fine.

I’ve been running this router configuration for a couple of weeks now and it seems to be rock solid. It may need further tweaks because it seems reluctant to use swap, but at the same time memory usage generally hovers under the 50% mark. I can’t tell if that’s because 512M is more than enough RAM, or the packages are just so conservatively configured that they need to be told to use the rest. Regardless, the Netgear R7800 has proven to be a worthy successor to my older WNDR 3800.

books – October 31, 2017 – Jack

On Mistborn

As with all of my book entries, this is intensely spoilerific.

I had intended to follow up my super serious last post with more ruminating about corporate programming and my new digs at AMD, but writing about that requires a lot of effort now that I’m almost a year into the grind and it’s not feeling novel as much as it’s feeling like… well, work. There’s a reason I barely mentioned IBM on this blog the entire time I worked there.

Anyway, life has gotten pretty hectic and, long story short, I’ve made a conscious effort to drop the bottle and pick up the book again, and as such I’ve spent a lot of time stone sober, absolutely flying through books.

Despite the fact that I’ve been reading a lot of “literary” fiction like Bukowski’s Ham on Rye and the most excellent Middlesex by Jeffrey Eugenides, unpacking those books is a scholarly pursuit that sounds distinctly… un-relaxing to undertake so, I’d like to spend a bit of time writing about the Mistborn trilogy by Brandon Sanderson.

I picked up the first book in a buying frenzy along with a slew of other books that I wanted to read, but after completing Mistborn, I had to move on directly to The Well of Ascension, and that of course led me straight into The Hero of Ages. I read the entire (first) trilogy, somewhere north of 2000 pages, in a few weeks because it’s just plain compelling. That was mid-September; I’m just now getting around to actually finishing this post =/.

Let me say, unequivocally, that I enjoyed the hell out of these books and I’d recommend them to any fantasy fan. Each story is logically self-contained and yet functions in the broader arc. Each character is well thought out, with clear motivations (at least, clear after you know the whole story). I felt truly invested in Vin, Elend, Sazed, Kelsier, and really the rest of Kelsier’s thieving crew from the first book, all through the entirety of the story. Every one of these characters grew and evolved over the roughly five or six year span of the novels, each in a completely realistic way. There wasn’t a single point in the entire story where I felt plot developments weren’t well underpinned by previous information, or where deus ex machina was used, which is especially impressive given that the story literally has gods in it.

Magic

I also really enjoyed the magic system, which is actually how I got the initial Mistborn recommendation from reddit. It’s really refreshing to read a fantasy series with “high magic” where magic users are so common that society has evolved around it, instead of “low magic” where magic is so rare that it’s basically extinct. This is really difficult to pull off since magic is, by definition, world breaking. The changes in society based around magic really added a huge amount of color and realism to the world. From the existence of hazekillers and the prevalence of “dueling canes” to fight Allomancers, to the caste system enforced by rounding up skaa (servant/slave class) with powers, and the Misting thieving crews created to fight back against it. Even the special creatures, mistwraiths/kandra, koloss, and Inquisitors all turn out to be natural extensions of the magic system rather than beasts that just somehow evolved nonsensically or were just created from scratch by the Lord Ruler without any regard for consistency.

Allomancy, Feruchemy, and Hemalurgy are all extremely well thought out and balanced in interesting ways. Allomancy requires fuel that’s consumed, but provides amazing powers. Feruchemy uses metal to hold “charges”, but they only work for the specific Feruchemist who made them, and each feat of strength is only possible after an interval of weakness. Speed is purchased with lethargy, wakefulness with sleep, memorization with forgetfulness. It’s really beautifully balanced. Hemalurgy provides great power, and permanently, but it has the heavy costs of death and diminished returns, plus the fact that you basically need to be controlled by Ruin itself to place a hemalurgic spike in the right spot to convey that power.

Progression

One great thing about the Mistborn trilogy is that each book really has a distinct identity.

The first book is very much about rebelling against an oppressive and brutal dictator, the Lord Ruler, the immortal God controlling the entire world. This story takes place in a stagnant, but extremely stable, world of basically Victorian England, complete with street-urchin thieves and a feudal-style nobility fanatically devoted to that God. Add in an underdog orphan, Vin, who discovers she’s actually a powerful Allomancer and gets involved with a legendary thief, Kelsier, who trains her as part of his budding rebellion to overthrow the Lord Ruler, and you’ve got a great basis for a story. There’s a lot of thieving, spying, and even some courtly balls and romance before the skaa are ignited in rebellion. In this book, a lot of the trappings of the universe (the mist that floods the landscape at night, the persistent ashfall, mistwraiths, kandra, Inquisitors) seem like pretty window dressing, but in reality they form the foundation of the overall story. Anyone who reads the setup for this book knows how it ends: our plucky heroes succeed at overthrowing the Lord Ruler. But there are quite a lot of twists and turns on the way, and the consequences of that success are unpredictable.

Which leads us directly to the second book, The Well of Ascension. This book is drastically different, in no small part because the society of the Lord Ruler has fallen apart and the Empire is in chaos. Kelsier, the charming leader of Mistborn, is dead, and his crew is tasked with forming the cabinet of the new king, Elend, Vin’s love interest, who was put in charge after the Lord Ruler’s death. This book is the weakest of the three, like a lot of middle books in a trilogy, because it has to form a bridge between the origin in book one and the actual resolution in book three. It spends a lot of time on Elend becoming a leader rather than a scholar, the thieving crew changing into advisors and generals instead of criminals, and Vin herself coming into her own as a noble Mistborn assassin instead of Kelsier’s sidekick. Most of the characters are in a transitional state that’s as awkward for the readers as it is for the characters, but at the same time Sanderson really does a good job showing how each character adapts to fit their new roles.

The second book’s overall story is driven by politics instead of rebellion. Elend holds Luthadel, the former capital of the Final Empire, with the title of King, but that throne is contested by multiple usurpers in a four-way Mexican siege-off. Meanwhile, the mists that were harmless window dressing in the first book begin to kill people and last longer and longer into the day, such that Vin believes the mists are the Deepness, the mythical enemy that the Lord Ruler defeated at the Well of Ascension. Vin spends a lot of time coming to believe that she is the new Hero of Ages, and that she has to follow the Lord Ruler’s example and go to the Well of Ascension, but instead of wielding the power there, like the Lord Ruler did, she has to selflessly give it up to save the world. After a ton of political maneuvering, Elend getting deposed as King by the parliamentary Assembly he himself designed, and the Battle of Luthadel being resolved by Vin discovering she can control the massive koloss army waiting outside the walls, Vin finds the Well of Ascension, takes the power, and releases it selflessly… only to realize she’s made a terrible, terrible mistake and has released Ruin, an evil God who had been manipulating them all the entire time. It was a great subversion that plays so well because we’ve been trained to expect the hero’s noble sacrifice to set things right, but instead the heroes were all acting as agents of Ruin the whole time.

The Hero of Ages then opens on a world that is imminently ending. The mists are still getting worse, and now the ashfalls are threatening to choke the world. Only a tiny portion of the Empire is capable of actually growing crops. Elend, now a powerful Mistborn thanks to the events at the Well of Ascension, and Vin have worked together to solidify his position as the new Emperor, taking cities to protect them from the new, harsh world nearing the apocalypse.

The third book does a great job tying everything together. The book reframes the entire series’ conflict in terms of Ruin and its opposite but equal god, Preservation. This is the book that properly introduces Hemalurgy (after only glimpses previously) as the magic art of Ruin, Allomancy as that of Preservation, and Feruchemy as the art of humanity, who were created by Preservation and Ruin together. Interestingly, The Hero of Ages also completely reframes the Lord Ruler. The Lord Ruler, initially viewed as a brutal dictator, and then as a selfish impostor, Rashek, who should have given up the power of the Well of Ascension, is revealed to have been a good guy. The Deepness, the product of Ruin influencing the mists, had to be stopped, so the Lord Ruler took the power of the Well (instead of releasing Ruin) and attempted to burn the mists off by moving the planet closer to the sun, but screwed up. To deal with that, he created the ashmounts to spew ash into the sky and insulate the world by reflecting the heat back into space. Of course, humans can’t breathe ash, and plants can’t grow in it or under a red sun, so he modified them and created microbes to eat the ash, and along the way he was manipulated by Ruin into creating mistwraiths, kandra, and Inquisitors through Hemalurgy. After that, the power of the Well, a sliver of Preservation, faded from him. So really, his heart was in the right place, but he only had the power for a few minutes, constantly waning, and then had to use his experience to subtly deal with the mistakes he himself had made trying to solve the initial problem of the Deepness.

The third book ends with our heroes successfully averting the end of the world by defeating Ruin in a climactic battle.

The point of this recap, however, is to show that each book is different (revolution, politics, and apocalypse) and yet builds on all of the previous work in inventive ways such that details like the reasons for the Lord Ruler’s brutal behavior, the reason that ash falls from the sky, or how magic works all fit together nicely without spending any time rehashing the same plots.

A Bit of Criticism

It should be obvious at this stage that I really enjoyed these books, and particularly how well the world was constructed and the story told within its confines with the reader’s understanding pleasurably shifting from one page to the next.

That said, if I had to issue a single critique of all three books, it would be that Sanderson spent a lot of time constructing his world and constructing his plot and constructing his characters so that they all interlock perfectly, like a jigsaw puzzle, but the books can come off stilted as a result.

I mentioned before that each twist and turn of the story was well supported by previous text. This generally means that character motivations are clear and well thought out, and without this level of care there would be a lot of “where did that come from, WTF” moments, the hallmark of shitty fiction… but it seems like Sanderson is so afraid of that criticism that everything needs to be not only consistent but well telegraphed and logical.

For example, consider the leaders that oppose our main characters in both Well of Ascension and Hero of Ages: Straff Venture, Cett, Lekal, and later Yomen are all hyper-rational. This may seem silly to assert considering Straff trusts his clearly insane Mistborn son, Zane, even though he suspects (wrongly) that Zane is constantly trying to poison him, and Elend’s old friend Jastes Lekal basically screws himself by bringing an army of koloss he can’t control, but it’s clear that both of them weighed their options carefully and made logical decisions even if their gambits ended up being mistakes. For example, Straff uses Zane because Mistborn are just that powerful. Indeed, if Zane didn’t have his own agenda, he would have been the only tool capable of winning the siege for Straff.

The worst offenders in this regard, however, are Cett and Yomen. Cett joins forces with Elend despite Vin murdering basically his entire entourage in front of him. Is this a rational decision? Yes, in the sense that Cett has his back to the wall and this is his best bet for coming out of the siege alive. However, this would have been a perfect time for someone to behave irrationally, having just witnessed hundreds of his best soldiers slaughtered before his very eyes and having come very close to being assassinated himself, and yet Cett doesn’t even beg for his life or get angry. I don’t think he should have killed himself or something ridiculous just for the sake of spectacle, but maybe he could plot revenge on Elend or Vin, or backstab them at a crucial point, or just be a bit more of a dick to everyone in light of his humiliation. Anything but simply accepting his fate as a willing tool of his foe for the next book, even if all that changed was a bit of angry dialogue.

Yomen, the religious zealot who took over Cett’s capital, Fadrex City, while Cett marched on Luthadel, is also frustratingly rational. In Hero of Ages, he’s in disbelief that his God (the Lord Ruler deposed in the first book) is actually dead, and he keeps the faith by trying to maintain the Final Empire’s culture in Fadrex. That’s all well and good, but when he finally comes to grips with the fact that his God is dead, he… abandons that religion and culture and joins forces with Elend, like Cett before him. Once again, this is a rational decision (especially since it’s the literal End of the World), but again we have someone whose core beliefs have just been shattered acting with the same cool, logical approach as everyone else. Now, in Yomen’s defense, it’s easy to have a logical belief in God when he’s a real person, but his belief was already relying on hand-waving after the Lord Ruler’s death, so when his religion is proved false it seems more realistic that Yomen would react poorly.

It feels weird to me that I’m effectively complaining that the books were too logical, but in the end it does feel sterile and constructed when everyone behaves this rationally, even under extreme duress, or when their beliefs are utterly destroyed.

Along the same lines, in the Mistborn trilogy Sanderson has a lot of trouble making the characters’ voices seem distinct after Kelsier dies at the end of the first book. To some extent, the dialogue between Elend, Vin, Sazed, and a lot of the Crew is interchangeable. When they expound on topics or devise plans, which happens a lot throughout the books, you could strip away the attributions from most of the dialogue and never be able to tell who’s speaking, except when they reference what they’ve been up to. Dialogue amounts to stating facts or assumptions, tying them together, and then agreeing on a plan. The characters are too well aligned. Everyone is equally rational, everyone has roughly equal priorities. Even the occasional argument is taken in stride, and everyone proceeds to do their duties without further incident. It’s as if dialogue only exists to convey information, where a more stylistic or character-driven author would use more evocative language or even illuminate the character’s state of mind.

One of the best examples of this is when Yomen and Elend speak after Elend infiltrates the first ball in Fadrex. These two are natural enemies, and when Elend contrives to sit and talk, they… debate the finer points of various books. Okay. Elend leaves the conversation with a greater respect for Yomen. Okay. I mean, there’s nothing wrong with that, if you can suspend disbelief far enough to get Elend and Yomen to sit down at a table at a ball during a siege, but it seems like a missed opportunity to inject a bit of venom into the dialogue, even if it requires a character to be a bit irrational, or prideful, or spiteful.

And that’s the crux of the issue. There are very few flawed characters in Mistborn. There are characters that believe wrong things, or make bad gambles, but there are no characters that are just… assholes, or cowards, or brutes, or underhanded schemers – which is quite a feat considering the story begins with a crew of criminals. Every main character seems to have “with a heart of gold” tacked onto their description. Vin, Elend, and Sazed are basically paragons, and considering Sazed literally becomes God that’s not so bad, but the other characters’ chief failings amount to… what? Breeze has a drink sometimes? Even Kelsier, who’s portrayed as something of a rogue, a legendary master thief, is good to a fault, right down to completing the mandatory Christ-like sacrifice to found the Church of the Survivor.

The most flawed character I can think of is obviously Zane, but his flaw comes in the form of a literal voice in his head, which is extremely ham-handed. Zane pursues his own agenda by wooing Vin to rule with him, but he is mostly just a story dead end; he serves to keep the action-filled Mistborn chases going in a book that’s mostly political (Well of Ascension). Otherwise there’s Camon, the leader of Vin’s initial thieving crew, who beat her; or maybe Marsh, who gave up on the Rebellion (before saving it); or Yeden, who leads the rebel forces to slaughter. But all of these characters are relatively minor (two of the three are gone by the second half of the first book, and the other plays a small part).

It’s always good for characters to have solid motives, but without some flaws or contrasting priorities to differentiate the characters it starts to feel like all of the main characters are really just aspects of Sanderson, each applying the same logic and the same reasoning to achieve the same goal as any of the other characters would given the information at hand. Which is a shame because Sanderson does a great job developing each character’s plot everywhere else, but even the bad guys feel like they are the same aspects of Sanderson with different goals. The two Big Bads of the trilogy, the Lord Ruler and Ruin, are shown to have cold logic behind their actions. The Lord Ruler brutally prepared his Empire to combat Ruin and survive, while Ruin was fulfilling his part of the bargain with Preservation.

This is why Mistborn fails to produce something that all truly iconic fantasy series need: a good villain. Sanderson is so wrapped up in logic and characters behaving rationally that there is no Lord Voldemort who wants to kill half the world for being impure. There is no King Joffrey that tortures prostitutes for a laugh. There is no Sauron bent on dominion. There’s the Lord Ruler, who’s not such a bad guy once you get to know him, and Ruin that wants to destroy the world because… that’s what Ruin does. These books desperately needed some objective, human scale, pure evil baddie to be defeated in addition to the more ambiguous Lord Ruler and the abstract Ruin.

In the end, despite my criticism, I did thoroughly enjoy reading all three books, because the world is fantastic and the plot is interesting, even if the characters, no matter how much fondness I have for them, and their dialogue take a backseat to it. The only thing keeping this trilogy from being truly classic is that it’s so well crafted that it’s impossible to escape the evidence of its creation. True classics, like The Lord of the Rings trilogy, give the feeling that the world existed long before the story was told, and will exist long after it has ended. Unfortunately, that requires a story that’s more organic and less constructed, more flawed and less rational, than these books delivered. That said, I wouldn’t turn down another jaunt through the world of Mistborn, even if I can see behind the curtain.

work – January 26, 2017 – Jack

On Leaving IBM

After working with IBM for over 8 years, this week marks my last with Big Blue. Next week, I’ll start my new job with AMD.

Since I’ve been spending a lot of time over the last month or so reflecting on my time at IBM, I figured I could use this post to collect some thoughts on why I’m leaving.

Why I’m Leaving IBM

My biggest reason for leaving IBM is that I’ve grown weary of being isolated from the people that are important to my work. I’ve worked tangentially with people across the US, Germany, Brazil, Ireland and India, but IBM’s Linux and free software expertise is focused in Australia, namely OzLabs.

Since its acquisition in 2001, OzLabs has been the center of PowerPC Linux. Now, with OpenPower (the leaner, meaner PowerPC), OzLabs controls the Linux port, most of the firmware, and the main bootloader for the platform. In other words, every project in IBM that I was interested in or worked on.

This, in and of itself, was no problem. OzLabs is transparent and fanatically open source. Even non-Linux development is done on public mailing lists with a bevy of git trees. It’s easy to observe or even participate if you’ve got changes in mind.

My problem was that working with OzLabs was simultaneously unavoidable, given my interests at IBM, and really, really difficult, thanks to my being 8,000 miles away and outnumbered 10 to 1. Over the course of my IBM career I attempted to bridge that distance, but in the end I found that I need what OzLabs already enjoys and nowhere else in IBM could provide me – a critical mass of local developers to work with.

Let me provide a little context for how this happened, and how I reached that conclusion.

Early History

I spent the first half of my time in IBM (2008-2011) doing the best that I could supporting embedded PowerPC from here in Austin. I learned the architecture from the point of view of weird devices like Cell, Prism (WSP), Chroma (a PCIe card variant of Prism), and Espresso, but aside from a brief couple of weeks in 2010 doing Prism bring-up with OzLabbers in Raleigh, none of this gave me any opportunity to actually learn to be a kernel hacker.

That’s a pretty loaded statement, so let me clarify. I had plenty of opportunities to read and write Linux kernel code. What I didn’t have was any experience actually finding work before coding, or getting work included in Linux after coding. Both of these tasks are critically important, but in those early days work was dropped into my lap and, when I thought I was done, it was handed off to someone else to upstream, or it wasn’t upstreamed at all. For example, the small amount of Prism work I did ended up in Linux years later without my involvement and was subsequently stripped out of Linux without my involvement either.

When there’s a lot of work to do and it’s easy to come by, this arrangement isn’t so bad. At the time my tasks almost universally came from people in Austin, were intended to support people in Austin, and in return I was supported by people in Austin. Even though OzLabs was still the center of PowerPC Linux and my early code was reviewed and submitted by OzLabs, when I needed help or new work I didn’t call them, I talked to people that were a few doors down from my office in Austin.

Unfortunately, embedded PowerPC dried up after Espresso, and that all changed.

BML

Remembering my wonderful experience bringing up Prism and Chroma in 2010, I joined the Bare Metal Linux team with hopes that their reputation for hardcore bring up and CPU enablement work would directly translate into Linux commits.

Circa 2012, the BML team was small and mostly local to Austin, although still led by an OzLabber. POWER 7 was still pretty new and POWER 8 was waiting in the wings.

My first task was actually testing and writing proof of concept support for a new P8 feature called “accelerated switchboard” that included a new instruction PBT (Push Block to Thread). I felt like I was on the right track, I had a new chip feature to bang on and I was in the simulation and lab environment for the main POWER line of server processors instead of the weird embedded devices I cut my teeth on. I was even excited that my team was directly connected to OzLabs.

BML’s reputation was an old one though, and it was earned when getting Linux to run on a new chip required a lot of hypervisor support that didn’t exist when your chip was a simulation or a prototype fresh out of the fab. Linux without a hypervisor (and potentially running on glacially slow hardware simulators) required a lot of custom patches, custom stub firmware, on top of a set of lab tools to actually generate a device tree and load various artifacts into memory on a variety of platforms.

When I joined BML, the codebase was geared around this level of deep involvement. It had a directory full of bit-rotting Linux patches, a rickety build system based on snapshots instead of git trees, as well as a pile of awful Perl hacks used to interface with lab infrastructure. That may sound harsh, but I doubt any of my former teammates would disagree. It was clear that BML evolved from a minor miracle into a useful lab debug tool rather than being designed with that goal in mind.

Meanwhile, back in OzLabs, work was being done to effectively turn hypervisor-less Linux on Power into a fully supported platform (OpenPower). This wasn’t a bad thing. In fact, from my perspective, OpenPower is not only a great step for PowerPC but also the ultimate validation of BML’s purpose. However, as OpenPower started to gain traction, “bare metal Linux” went from a complex feat of hackery that could justify four or five hardcore engineers working in concert, to being a thin layer of scripts around a platform that was being professionally supported by Australia.

This shrinkage explains two things that I didn’t understand until later.

First, why I never actually worked with my teammates on anything long term. Almost all of us were being loaned out to other projects because there wasn’t enough work to do on BML itself to require the headcount. We were all working on line items that were, at best, tangentially related to BML. Writing drivers, working with research, debugging simulators, integrating the lab with system testing and so on. Having to stop and hack on the BML lab infrastructure we were technically attached to was often an annoying and inconvenient distraction from the other tasks on which we had been focused.

Second, how in that same time period I ended up with exactly zero Linux patches. BML itself didn’t require kernel support anymore (beyond providing builds) thanks to OpenPower. Even the CPU enablement stuff I did get was either dropped from the final release (AS), taken out of my hands (64 bit decrementer, although that was because I was loaned out again), or both (load monitor).

BML was less a team and more a holding zone for kernel-level engineers to be assigned wherever needed. I get how this had utility for IBM, and maybe even OzLabs sometimes, but it was a real shit situation for someone like me. From my perspective, bouncing from project to project was just a way to never make a significant difference anywhere. To make matters worse, it was apparent that a lot of interesting work was being done during my time in BML, work that I desperately wanted to do, but it had been absorbed by OzLabs while I was unwittingly wasting my time doing shit like implementing merge sorts on FPGA devices or trying to debug ancient Perl scripts.

When I finally looked up long enough to get some context, I felt like I was busy scarfing down dog food while OzLabs was just finishing up the filet mignon. By the time I got a crack at their leftovers the competition for that work was fierce and still dominated by other OzLabbers that could divvy up work and help each other while chatting over coffee instead of contending with a 17 hour time difference and tedious emails.

This is why I wish that I had learned to work more closely with OzLabs, or work more independently earlier in my career when it wasn’t mandatory and the stakes were lower. Having done only inconsequential work, I was still an unknown quantity to OzLabs on top of operating with a severe handicap compared to other options. Certainly not the first person you’d think of when you needed work done with a minimum of hassle. Because of this and the fact that I didn’t know how to find alternative work myself, I started to feel hopelessly mired in low priority, dead end tasks. Whether that was an accurate perception or not, I viewed my lack of kernel work and the stopgap nature of my BML work as evidence that it was true and lost faith that I would ever accomplish something meaningful.

I became despondent.

It didn’t help that while I was on BML there were multiple rounds of layoffs, we were all forced to take a week of furlough (unpaid time off), and my friend and team/officemate Ryan Grimm left Austin so I was telecommuting all of the time. Morale was at rock bottom. I actively pursued other jobs, stopped taking my work seriously, stopped tuning in to weekly Oz synchronization calls in favor of family dinner, the news, or bedtime stories. I spent weeks on my own rewriting BML’s infrastructure from scratch in Python for vague reasons that amounted to keeping myself busy beyond whatever nebulous line items I was actually supposed to be pinning down on the budget.

SoftLayer

Underscoring the fact that BML members were almost always focused elsewhere, my entire 2015 was dominated by supporting SoftLayer, a cloud company IBM acquired in 2013 that still ran x86 chips almost exclusively. This had nothing to do with BML, and I was only tapped because I was in the right area (SoftLayer is based in nearby Dallas), had low level experience, and basically nothing else to do.

SoftLayer was a transformative experience for me. On one hand, even in a totally different role, I was still mopping up after OzLabbers that absorbed all of the main firmware/kernel work before I arrived. On the other hand, I was shoulder to shoulder with a small team for the first time since Prism five years earlier and it felt great.

It was our job to convince SoftLayer developers and execs who were openly hostile to our architecture that our systems could hang in their highly automated data centers. It was a tough sell. In fact, the first time I went to Dallas to work with SoftLayer, they were throwing a fit and wouldn’t even communicate with IBM enough to sit down to lunch with me until the 4th day when the cavalry had arrived.

Thankfully, things went smoothly afterwards. In the following months I worked closely with my fellow IBMers as well as SoftLayer’s team, I had multiple daily phone calls that weren’t complete wastes of time, I had VP level visibility in both companies, and daily status notes with the Director of my organization. Most importantly, it was up to me to re-implement all the parts of SoftLayer’s infrastructure that were Windows only, culminating in converting roughly 6,000 lines of C++ (with 15,000 more in templates and random dependencies grafted in) into a tight 250 lines of Python that I actually got to demo for execs. I’m not saying I’m a miracle worker, but it felt good to prove under pressure that I was a competent Linux developer that wasn’t going to let some “copy and paste from Stack Overflow” Windows types scare me off with a Visual Studio project that looked like byzantine dogshit but was actually just implementing a simple (idiosyncratic, undocumented) JSON API in the wrong language for the job.

For so long I’d felt worthless, doomed to work as a member of a team that was mostly management fiction, cursed with dead end tasks that only landed on me after every OzLabber available had passed on them, or there was nothing left but scutwork. I’d even become aware that fresh hires at OzLabs had become far more productive than I was with years of supposed experience and started to wonder if I just sucked and nobody had the balls to say it to my face.

After SoftLayer it was like waking up from a trance, or fitting the last piece into a puzzle. Something clicked. I no longer felt despondent, I felt confident. I realized that those fresh hires weren’t prodigies or supermen, they were benefiting from the same sort of close quarters team collaboration and effortless communication that I hadn’t felt in the years between Prism and SoftLayer.

It was at this point that I developed a creeping suspicion that my time at IBM was drawing to a close.

Pulling the Trigger

I returned to BML early in 2016 and attempted to keep working.

Sure enough I was assigned some P9 kernel work, and sure enough as soon as there was an issue with it, the patch was no longer mine because it was easier to fix ASAP than it was to spend another 24 hours sending messages back and forth. To add further frustration, that code also got reverted when support was dropped from the chip so even if I hadn’t screwed up I still wouldn’t have had a patch in anyway.

I became confrontational. The next time there was a thread about the future of the team, I let everyone know what I thought about the current state of the team, how the project was being obsoleted by OpenPower, how its remaining functionality didn’t take nearly as many developers, and how BML should either be refocused on my rewritten version or destroyed. I kept it professional, but I assume I still came off heated. Regardless, nobody gave a shit about my opinions. In retrospect I think I was expressing my frustration with the team more than I was expecting anything to change, but it would have been nice if someone had stepped up to defend BML from my criticisms even if my vision for the future of the team was weak.

I left BML shortly thereafter and was placed on the OpenPower team. Once again led by an OzLabber, isolated from everyone making decisions, bouncing between projects. Not having the BML infrastructure to worry about was a step in the right direction, and my team lead did the best he could to keep me busy, but at this point I was longing for a local team with a project where I could be self-driven instead of relying on others to mete out my weekly portion.

Ironically, this is where, after eight years, I finally got my one, single, solitary upstream Linux commit that was actually my own work, even if it was only a simple cleanup.

Anyway, 2016 was just a long series of confirmations that, barring a relocation to Canberra, IBM didn’t have the capability to provide what I wanted in an area I was interested in.

Blame, or lack thereof

So who do I ultimately blame for leaving? Well… nobody.

Despite my feeling overshadowed, OzLabbers didn’t do anything to wrong me. On the contrary, I learned practically everything I know about writing good C and assembly from reading theirs. They had never been anything but friendly to me and understanding when I fucked up. What should they have done differently to keep me at IBM? Not revitalized the platform so I wouldn’t be envious of their good work? Should they have ignored local talent to give me an even playing field? Should they have given me bigger chunks of work when I never proved I could handle the smaller ones? No, of course not.

I do wish that I’d had the opportunity to visit OzLabs to learn their workflow in their natural habitat. I think I would have made a better impression trying to work with them, rather than only meeting in Austin when they were dashing between meetings instead of kernel hacking. Perhaps then I would have gained insight into how to better work with them from Austin, or how to work more effectively on my own, but I’m doubtful it would have mattered.

As for IBM overall, I can’t say there was much they could do either, short of keeping embedded PowerPC going with an Austin team.

For myself, I know I could have done better. There are other developers outside of Australia that do just fine kernel hacking PowerPC, and maybe with a bit more skill, patience, experience, or even just the flexibility to work on something outside of the base chip, I could’ve been one of them. I certainly could have been more aware of what was going on, and less prone to spells of depression. I could have been more communicative, or more pro-active.

In my defense, most of the time I did the best I could with the information I had. Yes, I made dumb choices, I made naive mistakes, but a lot of this is only clear to me now with years of hindsight. It should be no surprise that 31-year-old-seasoned-programmer me would do 100x better than 22-year-old-college-grad me given the chance, so I try not to dwell on my failures after I’ve learned from them.

The bottom line is that as I matured as a programmer and employee, what I wanted from my team and employer changed drastically enough that I couldn’t be accommodated. There is no fault, just greater understanding, so while I very well could be making a huge mistake, I’m reminded of this recent xkcd:

xkcd: Settling

Some shout outs

At this point, all I can say is that I firmly believe OzLabs is filled with the best, most professional and accomplished engineers I’ve ever worked with. OpenPower is a huge leap for PowerPC and a major achievement for IBM’s server business. As it has been for more than 15 years, the architecture is in good Australian hands.

I also want to bid my North American BML peeps and SoftLayer invasion force brothers in arms farewell.

Oh and, hopefully for the last time, I just want to say “Fuck Lotus Notes.”

gaming, hardware, linux, software November 26, 2016 Jack No comments

Steam Link + Generic Gamepad + Linux Host

Like a lot of people this week, I picked up a Steam Link for $20 from Amazon and it arrived yesterday.

Previously, I’ve used Steam home streaming to and from Linux hosts and I’ve been very pleased with its performance, especially over a wired connection. I’ve streamed Skyrim from a Windows partition elsewhere to my media box running Arch, and it even worked on the nouveau driver, so it seemed like a safe bet to invest $20 in a Steam Link to function as a sort of Bluetooth KVM switch, letting me stream games and movies from any of the hosts in the house. This post isn’t a review of the Link, however; it’s to clear up exactly how generic controllers (i.e. not the Steam Controller) work in Big Picture Mode (BPM) and how to resolve a couple of Linux configuration snafus.

A note on generic controllers

When I bought the Steam Link, I read that it supports the Wii U Pro Controller. I’ve got a couple of those lying around since we have a Wii U and they’re pretty nice controllers. “Supported,” in this case, just means that the Link will properly pair with the controller and that it has a good mapping between the Wii U Pro Controller’s buttons and the Xbox 360 controller it’s emulating. That’s it. It’s enough to get the Link interface and the BPM interface to work well, and nothing else.

What that means is that you should ignore Big Picture Mode’s controller mapping config. Seriously, the most confusing part of this whole experience was discovering that all the neat mapping functionality is Steam Controller only. I mean, I knew I couldn’t do all the cool stuff a Steam Controller can, like profiles and mode shifting, but I thought I could at least change simple button presses. I guess I should have taken the hint that the controller pictured in the mapping screen is a Steam Controller and the descriptions all mention “Steam Controller” by name, but it convincingly allowed me to mess with settings it knew would never take.

Despite that, the bindings are still active and will still change with context. Even though you can’t actually change the bindings yourself in the interface, BPM will still alter them based on whether you’re in Desktop Mode (normal Steam), BPM, or a game. This means that when you set up bindings in a game, it’s crucial that the game was launched via Steam and not any other way.

This threw me off quite a bit because I thought it’d either be “Steam can configure this device” or “Steam won’t touch this device”, not “Steam will pretend to configure this device, fail, but then configure it implicitly when you do something anyway.” The latest Steam betas will warn you that you’re configuring a different controller, but will still frustratingly pretend that it’s actually going to try to make it work. It’d be nice if Valve made this more explicit by forbidding you to change the mapping, instead of it silently reverting to defaults.

Fix constant display/video flickering on NVIDIA cards

The first real problem I faced was that connecting the Link to my Arch Linux system worked, but Steam in BPM would incessantly flicker. Not occasional frame tearing, but headache-inducing strobe flickering. [UPDATE: Now that I’ve tested on better hardware, I’ve found that this option is necessary to fix the more subtle flickering of full motion video too.] I’m not sure how to fix this on nouveau, but I switched to the proprietary driver and invoked a command along these lines (a sketch of the standard composition-pipeline fix; substitute your own adapter, mode, and offset):
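    # Force NVIDIA's composition pipeline for the current mode; this is
    # what stops the strobing. HDMI-0, 1920x1080_60, and +0+0 are examples.
    nvidia-settings --assign CurrentMetaMode="HDMI-0: 1920x1080_60 +0+0 { ForceFullCompositionPipeline = On }"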

The flickering stopped. Note that you’ll need to customize the mode definition: HDMI-0 is the adapter, 1920x1080_60 is the mode (resolution and refresh rate), and +0+0 is the screen offset. You can query all of this from “xrandr -q” if you’re unsure. For example, here’s the sort of output you’ll see (illustrative, trimmed to the relevant lines):
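    Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 16384 x 16384
    HDMI-0 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 531mm x 299mm
       1920x1080     60.00*+  59.94    50.00
       1280x720      60.00    59.94    50.00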

You can see the adapter name, the resolution, the offset (+0+0), and the refresh rate marked in the current mode (60.00*).

You could automate this either by using nvidia-settings to write your xorg.conf, or by just invoking the command once before you start Steam on the host. Personally, since I use an XDG-autostart-compatible WM (like Cinnamon, or Openbox), I just have it in an autostart script so my xorg.conf is still (mostly) auto-generated.
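With Openbox, for example, that amounts to a single line in its autostart file (the path is Openbox’s convention; the MetaMode string is the example from above):

    # ~/.config/openbox/autostart -- runs once when the session starts
    nvidia-settings --assign CurrentMetaMode="HDMI-0: 1920x1080_60 +0+0 { ForceFullCompositionPipeline = On }" &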

Fix Big Picture Mode cursor confusion

The next problem I faced was that Big Picture Mode had a big blue mouse cursor on it, a Linux desktop cursor, in addition to its highlight cursor, and the two were out of sync. It made it very hard to tell what exactly you were doing in the interface. Initially, I thought it was Steam’s bindings getting confused, but it’s actually Xorg trying to be helpful. When you connect a generic controller to the Steam Link, it forwards the traffic over the network as a virtual Xbox 360 controller. Xorg sees that device and says “oh hey, a joystick device, I know what to do with that!” and attaches the mouse.

If we were plugging a controller into a standard Linux desktop, this would be a lifesaver, but since our interface is specialized to use the controller it’s just confusing. So, the solution here is to tell Xorg to stop. A minimal sketch of the xorg.conf.d snippet that does it (the file name is arbitrary; the InputClass joystick catch-all is the standard approach, and the options belong to the xf86-input-joystick driver):
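    # /etc/X11/xorg.conf.d/50-joystick.conf
    # Match every joystick device and stop it from generating pointer
    # and key events, leaving the device alone for Steam to grab.
    Section "InputClass"
        Identifier "joystick catchall"
        MatchIsJoystick "on"
        MatchDevicePath "/dev/input/event*"
        Driver "joystick"
        Option "StartKeysEnabled" "False"
        Option "StartMouseEnabled" "False"
    EndSection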

After restarting X, this will keep Xorg from automatically connecting the mouse cursor to your gamepad stick and leave Steam alone.

That’s pretty much all I had to do to get basic Link functionality up and running when connecting to a Linux host.

politics November 9, 2016 Jack No comments

Trumpocalypse 2016

It happened. It actually happened. Trump is going to be the next President of the United States and I’m pretty sure the entire American public is sitting around looking at each other and feeling completely numb. Unless you voted for him, then you’re probably ecstatic that he overcame what appeared to be insurmountable odds.

Personally, I voted Hillary. Not because I think she’s a good person, or would be a good President, but because I thought an oligarch would be a better choice than a racist psychopath. That said, as someone that supported Bernie vehemently until long after it was obvious he’d lost the primary, I can’t help but think that Hillary and the DNC did this to themselves.

It was obvious during the primary, and in the email aftermath, that the DNC wasn’t interested in giving Bernie Sanders a fair shake. Not only were all of the rules and resources of the DNC focused on putting Hillary in the general, but there was also the travesty of superdelegates who honestly believed she was the best the party could offer despite her history of scandal, the appearance of corruption via the Clinton Foundation, and her general lack of inspiration. Hell, her platform was an assortment of continuing Obama’s policies and adopting Bernie’s in an attempt to pander to the dissatisfied progressive wing of the party.

Even I could see a lot of problems with Hillary’s primary performance before she was the nominee. For example, Hillary absolutely dominated primaries in the South where, lo and behold, votes didn’t matter one bit yesterday. She didn’t carry a single southern state, not even Florida, which was demographically in reach. She won many closed primaries that excluded anybody that didn’t identify as a Democrat (or didn’t identify in time). Those independents that were excluded went for Trump (CNN: 48% to 42%). Time and time again during the primaries, we saw the older contingent of those self-identified Democrats carry Clinton to victory, but older people skew conservative (one reason they liked Clinton over Bernie in the first place) and appealing to Boomers isn’t a winning strategy for a Democrat.

After the primary, she failed to reach out to the young Bernie supporters and minorities of all stripes, a lot of the same demographics that put Obama in the White House twice. The assumption was that they’d fall in line through fear of Trump, and that ultimately proved fatal. The youth still picked Hillary (44 and under went Hillary 52% to 40%), but the minority vote didn’t split for Hillary like it did for Obama, and a lot of that is squarely on Hillary. She spent too much time trying to target white voters, particularly white women, to erode Trump’s base while completely ignoring her own. What happened to abuelita after she’d won the Nevada, Florida, and California primaries? Where was civil rights activist, Bible-thumping Clinton after the South Carolina primary? Nowhere. Instead her campaign defined itself as not Trump.

Watching the general election debates, it was obvious that Trump didn’t really have a platform leg to stand on. He failed to give details about anything he promised to accomplish. Even now that he’s President-elect I don’t think anybody has the first clue of what an actual Trump administration is going to look like on day one. Clinton could have capitalized on this, but instead of inspiring people to vote for her by dismantling Trump piece by piece, she effectively looked at the screen and said “c’mon, seriously?”

And this is where everyone failed. Clinton, the media, most of the populace completely underestimated just how much the American public is sick of establishment politics. This is why Hillary failed to win yesterday. She wasn’t fighting the fight she was prepared for… one based on policy and parties, like the one Obama fought against McCain or Romney and how Democrats have fought against Republicans in a hundred other races. She was fighting a battle for the status quo against an uncontrollable tide of destruction. She kept expecting to defend her policies and ideas from her opponent’s policies and ideas when in reality she needed to be arguing why the establishment shouldn’t be burnt to the ground as a whole.

In the most bizarre way possible, Trump played this masterfully. He understood, in a way that I think Bernie understood but articulated much differently, that this fight didn’t have anything to do with concrete policy as much as harnessing American rage to destroy a system that everyone (even Clinton) acknowledges is broken.

The next four years are going to be rough. Trump has it within his power to strip away Obama’s legacy, turn the Supreme Court into a conservative bastion for decades, and just generally ruin the American reputation on the international stage.

politics February 5, 2016 Jack No comments

On Bernie Sanders vs. Hillary Clinton – without talking about financing

It won’t come as any surprise to people that know me personally that I’m a Bernie Sanders supporter. I’m pretty far left and I have absolutely no problem looking toward Europe for examples of good government taking care of its people better than the US. I don’t think twice about labels like “socialist” because I know what that label actually means and that Democratic Socialism has been a successful model elsewhere.

However, in this write up, I wanted to give concrete reasons to choose Sanders over Clinton for someone that doesn’t want to predicate this decision solely on whether Hillary Clinton has been bought and paid for by the Washington machine, because even though it’s obvious that money and lobbyists are corrupting our politics in general, it’s hard to prove that Clinton specifically is corrupt. The closest I’ve come is Elizabeth Warren calling out Clinton in 2004 about reversing course on bankruptcy legislation, which stinks of corruption but is still just conjecture about Clinton’s own motivations. In essence, I’ll give her the benefit of the doubt that she’s a woman of honor and the fact that she takes corporate money and gives paid speeches to Goldman Sachs doesn’t compromise her integrity.

Judgment vs. Experience in Foreign Policy

Bernie Sanders voted against invading Iraq – twice.

It’s a bit annoying how much he flogs this point in the foreign policy debates, but it is important to me. Why? Because in 2003 I was a 17-year-old boy, and over the course of that war I saw men just like me go to Iraq and Afghanistan and come back broken, or not at all. Issues tend to hold more influence when they cause people just like you to come back in flag-draped coffins from an unjust war. This was a big reason I voted Kerry in 2004, and Obama in 2008/2012.

In the end though, it’s not even directly about the Iraq War as much as it is the difference between experience and judgment, as Bernie mentioned in last night’s debate. Hillary has the foreign policy bona fides of being Secretary of State, but that amounts to experience, while her vote on the Iraq War showed a lack of judgment that was extremely costly for this country, both in terms of dead soldiers and in money wasted toppling a dictator who was the only thing holding back the factional warring we now see with ISIS. We traded lawful evil for chaotic evil.

Bernie, having no foreign policy experience but great judgment, voted against the Gulf War in 1991 on moral grounds only days after taking his first national office. Then, in 2002, when he had access to the same information as Hillary, he not only voted against the Iraq War, but also predicted the disastrous results, including the fight with ISIS (the whole speech is worth watching, but his list of unanswered questions begins at 2:46, and his fifth question, at 4:30, is “Who will govern Iraq when Saddam Hussein is removed, and what role will the US play in an ensuing civil war that could develop in that country?”).

Now it could be argued that her 4 years as Secretary of State make just that big a difference. After all, this was 2002 when she had no foreign policy experience, and as Hillary rightly pointed out in the debate, one vote in 2002 doesn’t give you a strategy against ISIS 14 years later. Yet Clinton certainly didn’t think it was an issue when she was running for President in 2007 and 2008 with no foreign policy experience, and less experience in government than Bernie Sanders has now. And she certainly doesn’t have anything bad to say about Obama’s foreign policy (because it was hers) even though he didn’t have any experience.

Bernie has excellent judgment despite lacking foreign policy experience (just like Obama and Clinton in 2008). As President, Sanders will have the best advisers and intelligence in the world, but he still has to make the right choices, even if it’s in regard to something he’s not an expert on (like new threats, or unprecedented world events). I just don’t buy that having a ton of experience outweighs having the judgment to make the right call, and the judgment to surround yourself with people that do have experience.

The PATRIOT Act

In a similar vein, Hillary consistently supported the PATRIOT Act: she voted for it when it first came up in 2001 and again in 2006, and backed it again in 2015 when it resurfaced as the USA FREEDOM Act (which was just the PATRIOT Act minus the NSA bulk surveillance, because by then we had found out about it).

Bernie, unsurprisingly, voted against it all three times.

I believe that this is another instance in which Sanders showed better judgment despite the mania of the time. Regardless of the size of the terrorist threat, there are certain freedoms that should not be infringed upon, like the 4th Amendment’s guarantee against unreasonable search and seizure. Ultimately, all the PATRIOT Act did was give the government license to monitor all of your communications with the barest hint of justification and without even a modicum of true oversight.

On top of that, it was rammed through Congress without enough discussion and debate. It was introduced on October 23rd, passed the House on the 24th (without Sanders’ support), and passed the Senate on the 25th (with Hillary’s support). The bill was 350 pages long and quite complex; I believe it’s highly likely that Clinton (like the 97 other senators that voted for it) rubber-stamped the PATRIOT Act without even reading it, based solely on the prevailing fear of the day – again failing to exercise good judgment.

Capital Punishment

Capital punishment is utterly barbaric and has no place in an enlightened society. You can’t kill people to show that killing people is wrong. There’s too much opportunity for miscarriages of justice to occur because there’s no such thing as 100% certainty. Even “slam dunk” cases can always have new light shed on them, as we saw with the advancement of DNA technology in the last century, and as such we should always leave room for error.

A list of exonerated death row inmates shows that seven death row inmates were exonerated just last year, people convicted between 1985 and 2013. Each one of them was wrongly locked up, which is bad enough, but it would have been immeasurably worse if their sentences had been carried out.

Hillary, at last night’s debate, said that capital punishment has its place, even if reluctantly, based on some draconian idea that really bad people deserve it. As if spending the rest of your life in a prison cell wasn’t an awful (and more just, and cheaper) punishment already.

Bernie Sanders is against capital punishment entirely, and his home state of Vermont has banned it since 1965.

Healthcare

The Affordable Care Act (ACA or Obamacare) is a disappointment. I’ve defended it in the past for only one reason: it’s a foot in the door. It gets everyone into the system and when everyone has a stake people are a lot more likely to care when opportunities for reform arise. In my view, the ACA ceased being a complete solution the moment the public option was dead because at that point it was just a way to force everyone into the for-profit insurance industry.

To be clear, Sanders voted for the ACA as an incremental improvement to a flawed system. However, he believed that it didn’t go far enough, wanting to supply Medicare-For-All in a single payer model. This is a proven model elsewhere in the industrialized world (Canada, the UK, Australia) and would provide us with truly universal healthcare, free at the point of use, at the cost of a slight bump in taxes that would be offset by the savings from not paying for insurance.

When Sanders was helping to draft the ACA, he attempted to pass an amendment that would have converted the ACA into Medicare-For-All but was forced to withdraw it because Tom Coburn (R-Oklahoma) threatened to destroy the entire bill. Sanders’ vote for the ACA was merely not letting perfect (single payer) be the enemy of good (the ACA’s improvements).

Hillary’s only real argument against it (other than the “he’s dismantling Obamacare” line, which was hopefully debunked last night) is that it would force us into another politically contentious debate with Republicans… well, what issue in either candidate’s platform wouldn’t? Who believes that anything they agree on, from gun control, to education, to infrastructure, to bank regulation, would go through Congress without a fight? Nobody. So if we’re going to fight about everything, why not aim for the system we want instead of calling it good enough to avoid the heat?

More to the point, Hillary’s desire to incrementally improve the ACA is admirable, but incremental change will never transform the ACA into government run, non-profit, cost controlled, and free healthcare like we see elsewhere in the world. At best it would add further restrictions to the insurance companies, but as long as those same companies are out to make a buck it’s going to be impossible to get them to stop cutting corners, stop finding ways to exclude costly patients, and overall stop finding ways to screw the American public.

Marijuana

It’s 2016. There are states in this country that have legalized recreational marijuana and have seen positive effects from that legalization. Even more states have decriminalized marijuana or authorized medicinal marijuana. The time for Hillary’s approach – moving it to Schedule II (which includes drugs with “high potential for abuse, with use potentially leading to severe psychological or physical dependence” like Adderall or Vicodin, both of which are far more harmful and addictive than marijuana) and doing research for another 20 years – is long over. Moving marijuana to Schedule II merely opens the door for national medicinal marijuana rather than actually dealing with any of the real issues.

Even if she won’t commit to fully legalizing marijuana, it’s well past time to recognize that locking people up for marijuana is a travesty and to decriminalize it. The idea that we should ruin someone’s life by locking them up and giving them a police record to keep them from ruining their life by smoking marijuana is absurd, especially when it clearly hasn’t worked as a deterrent.

In the end, legalization of marijuana is a win for everyone. Not only does it legalize activity people are taking part in anyway (activity that currently fuels black markets offering unregulated access to all other drugs), but legalization means that Americans can start making money on marijuana instead of the Mexican cartels that are happy to live like warlords on American dollars.

With government taxation, and the reduction of the load on the justice and penal systems, it would even be a net gain for the government itself (as it has been in CO, AK, WA, and OR). It’s rare that libertarians and liberals can agree on an issue, but there is wide agreement on this from everyone that can look past the War on Drugs rhetoric.

Now Bernie hasn’t come out and said “I will legalize marijuana” but he supports ending the federal prohibition on marijuana (legalizing it without overriding the states) which would make it a lot easier for marijuana based businesses to use federally backed services like banks and operate without the fear that the DEA will come knocking. Not to mention it would remove one more concern for states that want to legalize. I would prefer it if he was totally pro-legalization, but his position is still miles ahead of Hillary’s.

Other Concerns

Again, without bringing up Wall Street or campaign finance, there are some other concerns where the margins are a little slimmer between the two candidates.

Gay Marriage

In 2004, Senator Clinton defended the Defense of Marriage Act (DOMA), arguing that marriage was between a man and a woman and only served to raise children. In 2007, candidate Clinton supported civil unions and called marriage a states issue. In 2016, as a candidate again, she supports marriage equality… now that it’s overwhelmingly popular, has been upheld by the Supreme Court, and requires no action, only continued defense of the status quo.

Sanders voted against Don’t Ask Don’t Tell (DADT) in 1993. Defended gay soldiers in 1995. Voted against DOMA in 1996. Supported Vermont civil unions in 2000, and Vermont gay marriage in 2009 before it was legal nationally. The only blip in his support was in 2006 (while in the House) when he suggested that it wasn’t time to push for marriage equality because of the contentious nature of the 2000 civil union decision that proved extremely divisive in his home state. I’m inclined to let that pass considering it was pragmatically motivated rather than any sort of prejudice, there wasn’t any legislation actually on the table at the time, and the state politics of Vermont were outside of his arena as a member of (the obviously national) Congress.

The reason I don’t put much weight on this point is that both candidates are pro-gay at this point. Yet I think it’s worth bringing up because either Hillary truly evolved on gay rights (in an arc suspiciously matching public opinion year by year) while Sanders was pretty consistent over 20 years… or she was willing to support legislation she didn’t believe in (DOMA) to back injustice. Neither of these possibilities is particularly flattering to Clinton.

“Establishment”

Clinton is 100%, totally unarguably an establishment candidate. There’s a reason that, before even one vote had been cast, she had collected 320 superdelegates and was considered the inevitable nominee the instant she announced. Personally, I don’t care about the “establishment” label, but her insinuation in last night’s debate that she can’t be an establishment candidate because of her sex was indefensibly sexist and completely irrelevant to the issues.

Scandal

I also don’t get the argument that somehow Clinton’s high-profile career means that she’s had all of her dirty laundry aired, and that this gives her an advantage over Bernie. Not only is there currently a minor scandal on her end that has taken too much time out of this campaign (emails), but Bernie has been in public office in various forms since 1981. Sure, he wasn’t in the White House, but over the last 35 years hasn’t he faced a lot of public scrutiny? Wouldn’t every person he ever faced, from mayor of Burlington, to Representative, to Senator, want to find some dirt on their idealistic and high-minded opponent?

Conclusion

In the end I believe I’ve laid out a case for Bernie Sanders to be my choice for President, without touching on nebulous claims of corruption through campaign finance. If you agree, disagree, or have corrections feel free to use the comments.

software March 13, 2015 Jack 6 comments

On Bspwm Tweaking

I’ve written before on my travels through the tiling WM landscape. It’s been a while though.

My most recent discovery is bspwm, a tiling WM that mixes automatic (think Xmonad) and manual (think ion3/notion) tiling, and takes a hands-off but play-nice approach to other desktop necessities like status bars and trays.

Why bspwm?

  • It’s Minimal. Similar to Xmonad, bspwm does exactly one thing, but does it extremely well. It places windows. It doesn’t have a built-in anything. No trays. No status bars. No menus. It doesn’t even directly handle keybinds, thanks to a companion program by the same author (Bastien Dejean) called sxhkd, or Simple X HotKey Daemon, which is a flexible tool to map keybinds to simple command execution, similar to xbindkeys but better. In my experience, the built-in extra features of WMs often lack the flexibility I want, so no loss there.
  • It’s Scriptable. Bspwm has a companion program, bspc, that can be used from the command line, or a script, to accomplish any action bspwm is capable of. In fact, bspc is so complete that all of the default sxhkd binds are bspc commands, and the bspwmrc is nothing but a shell script calling bspc to set configuration options, along with whatever other startup stuff you wish.
  • It Communicates. It’s a simple affair to extract information from bspwm, both via bspc and via the status FIFO that can be used to receive notification of bspwm internal events.

To summarize, bspwm fits into the sweet spot where it has the minimalism of Xmonad, trading out the Haskell for shell scripts and a whole lot of scripting potential.

My Setup

First an obligatory double-wide screenshot.


Very simple, but there are a few upgrades from the examples in the git repo, despite the fact that the color scheme is the same.

  • Easy Named Desktops. I’ve written a couple of helper scripts around bspc and dmenu that make it simple to create, switch to, rename, or destroy named desktops.
  • Enumerated Desktops. They’re also numbered, so that bspwm’s default keybinds are still easy to use alongside their descriptive names.
  • Double Monitor Status Bar. I’ve split the monitor/desktop output of the status bar so that each monitor’s information is displayed on it, rather than all on the left side of the bar.
  • Keybinds for Focusing/Sending to Monitors. A simple job for bspc; I was surprised there was no default bind for it, but these binds allow you to shift windows and focus from one screen to another without knowing which desktop is on which.
  • Battery Monitor. I added a simple battery monitor on my laptop host.
  • A Tray. I’ve configured stalonetray to blend into my status bar.
  • Simplified Files. I much preferred having the panel configuration all in one file, instead of three (or four, if you count tweaking your .profile).
  • Hostname Tweaks. Tweaks to make my single config work identically between my desktop and laptop.

Named Desktops

Bspwm natively supports named desktops, but they’re cumbersome to use since you have to use bspc from somewhere. The example config spawns 10 desktops by number and calls it good. Well, coming from notion (a fork of ion3), I got used to the ability to name desktops something useful to remember what the hell I was doing on them, and then shift between them easily.

As such, I’ve added two scripts, which are trivial wrappers around dmenu and bspc. In sketch form (the script names are illustrative, and the bspc spellings assume a 2015-era bspwm where “bspc query -D” prints desktop names):
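    #!/bin/sh
    # desktop-go: dmenu over the existing desktops; typing a brand new
    # name creates that desktop on the focused monitor before focusing it.
    name=$(bspc query -D | dmenu -p 'desktop:') || exit 0
    bspc query -D | grep -qx -- "$name" || bspc monitor -a "$name"
    bspc desktop -f "$name"

    #!/bin/sh
    # desktop-mod: rename the focused desktop, or remove it with -d
    # (bspwm only removes a desktop that is empty and not the last one
    # on its monitor).
    if [ "$1" = "-d" ]; then
        bspc desktop -r
    else
        name=$(printf '' | dmenu -p 'rename:') || exit 0
        [ -n "$name" ] && bspc desktop -n "$name"
    fi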

I put these into my $PATH someplace; I use ~/bin for all of my custom scripts.

Then, in sxhkdrc:
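    # named desktop management (the two wrapper scripts from above)
    super + d
        desktop-go

    super + ctrl + d
        desktop-mod

    super + alt + d
        desktop-mod -d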

This allows you to use Super + d to create a new named desktop, or switch to it if it already exists. Super + ctrl + d will rename the currently focused desktop. Finally, Super + alt + d will destroy the focused desktop (but only if it’s empty and not the only desktop on its monitor, which is a restriction of bspwm).

Panel Improvements

I’ve combined the various panel example files into a custom single file that’s still run from bspwmrc.

This covers the enumerated desktops, split monitor status, tray, and hostname tweaks I mentioned above.

Here’s the shape of the panel script, commented with the improvements (a condensed sketch rather than the file verbatim; it assumes the 2015-era “bspc control --subscribe”, which newer bspwm spells “bspc subscribe report”):
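    #!/bin/sh
    # panel: run in the background from bspwmrc. Every data source
    # writes prefixed lines into one FIFO; a single reader keeps the
    # latest value from each source and redraws the bar.
    PANEL_FIFO=/tmp/panel-fifo
    [ -p "$PANEL_FIFO" ] || mkfifo "$PANEL_FIFO"

    # 'W...' lines: wm state (desktops, focus, layout)
    bspc control --subscribe > "$PANEL_FIFO" &

    # 'S...' lines: the clock
    while :; do echo "S$(date '+%a %d %b %H:%M')"; sleep 20; done > "$PANEL_FIFO" &

    # 'B...' lines: battery, only emitted on my laptop (hostname tweak)
    if [ "$(hostname)" = "laptop" ]; then
        while :; do
            echo "B$(cat /sys/class/power_supply/BAT0/capacity)%"
            sleep 60
        done > "$PANEL_FIFO" &
    fi

    # Reader/formatter: %{l} and %{r} are bar's alignment tags. The real
    # script also splits the W line so each monitor's desktops draw on
    # their own half of the bar, and prefixes desktop names with numbers.
    while read -r line; do
        case $line in
            W*) wm=${line#?} ;;
            S*) clock=${line#?} ;;
            B*) bat=${line#?} ;;
        esac
        printf '%%{l}%s%%{r}%s %s\n' "$wm" "$bat" "$clock"
    done < "$PANEL_FIFO" | bar &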

I can’t really explain any better than the comments, but I think I’m using a decent setup that will allow for future additions to the panel pretty easily just by mimicking the date and battery status functions and their invocations.

Monitor Binds

I simply added this to my sxhkdrc (a sketch; the LEFT/RIGHT monitor names come from the bspwmrc block below, and “bspc window” is the old spelling of what newer bspwm calls “bspc node”):
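    # focus the left or right monitor
    super + {q,w}
        bspc monitor -f {LEFT,RIGHT}

    # send the focused window to a monitor
    super + shift + {q,w}
        bspc window -m {LEFT,RIGHT}

    # send the entire focused desktop to a monitor
    super + alt + {q,w}
        bspc desktop -m {LEFT,RIGHT}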

This follows the Xmonad convention, where Q and W represent left and right monitors respectively, so super + q focuses the left monitor, super + w the right monitor. If I had three monitors, I would use QWE for Left, Center, Right.

The second set of binds sets up super + shift + q/w to send windows to a specific monitor’s desktop, and super + alt + q/w to shift an entire desktop to another monitor. I find bspwm’s desktop focus to be a bit wonky with multiple monitors (focusing a desktop will focus it on whatever monitor it’s associated with, rather than the current monitor), but I still only rarely have to shift desktops between monitors.

In order for these binds to work correctly on multiple hosts, in bspwmrc I added a block to rename my monitors consistently across machines, roughly like this (a sketch; it assumes “bspc query -M” lists the left monitor first):
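    # Rename monitors to LEFT/RIGHT (just LEFT on the laptop) regardless
    # of what Xorg called them, then reset each one to a single unnamed
    # desktop ('' here mirrors that "single unnamed desktop" behavior).
    names="LEFT RIGHT"
    i=1
    for mon in $(bspc query -M); do
        name=$(echo "$names" | cut -d ' ' -f "$i")
        [ -n "$name" ] || break
        bspc monitor "$mon" -n "$name"
        bspc monitor "$name" -d ''
        i=$((i + 1))
    done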

This also clears the desktops, down to a single unnamed desktop on each monitor.

Download

You can grab my config files here.

This includes everything you should need to run bspwm with my config. It untars such that the bspwm-config directory is like your HOME, so the config files are in .config, and won’t be visible by default.

The only caveat is that you’ll need to put the bin files in your $PATH someplace. Like the example config, it also expects that you have xtitle and bar-ain't-recursive installed.

If I end up making any more significant changes, I’ll consider putting this up on my Github, but for now I’m pretty content with my setup and don’t feel the need to version control it.

books February 2, 2015 Jack No comments

On “Use of Weapons”

I have been positively binging on Iain M. Banks’ Culture series. I actually wrote about Consider Phlebas, the first book in the series, a few months ago. Since then, I read The Player of Games and now I just completed Use of Weapons.

Spoilers ahead, of course.

First, let me give Banks a posthumous “I see what you did there.” The novel makes a point. The title is apt. Zakalwe (which I’ll use by convention to refer to the main character) is a great commander and master manipulator, perhaps the ultimate weapon himself. The book did a good job of conveying how he became so tortured, mercenary and ambivalent even while maintaining his drive for redemption.

I also appreciate that the book was very ambitious in its structure, with the reverse chronology of the historical storyline. I found this initially extremely confusing, but only because I think confusion is inherent in such a structure, not because Banks’ execution was flawed. That said, I don’t think the unconventional structure helped tell the story effectively. Chapters were sort of shoehorned in so that the “twist” could occur on the final page of the work proper, but that left a lot of the flashback chapters deliberately vague, and – on first reading – utterly boring or nonsensical. I understand now, in retrospect, how these chapters related to the theme, but as I was reading, and under the impression that Zakalwe was Cheradenine and not Elethiomel, they seemed to drag on and hang there, disconnected from the overall narrative.

For example, let’s dissect Zakalwe’s chair-phobia. There are three distinct interpretations of it that develop as you read.

The first reasoning you find is that Zakalwe discovered Elethiomel having sex with Darckense in a chair in the summer house. This barely squares with the amount of fear of chairs Zakalwe shows, especially since it’s consensual sex and Zakalwe did nothing to stop them. It makes Zakalwe destroying the summer house seem like some gross over-reaction. This interpretation also makes Darckense the Chairmaker, which makes Zakalwe’s obsession with the Chairmaker seem a bit half-baked.

The second reasoning, which emerges in the last few chapters, is that Elethiomel made a chair out of Darckense’s bones. That definitely backs up the averse reaction to chairs. It also changes the motive for the summer house destruction to being reminded of the incident, a personal betrayal, as Major Zakalwe (Cheradenine, pre-bone-chair) fights a war against Elethiomel, the Chairmaker, who we now know explicitly is the enemy they’re fighting.

The final reasoning, which drops with the last page of the book proper, is that Zakalwe is Elethiomel and not Cheradenine, that Major Zakalwe’s (Cheradenine’s) actions are not those of the main character, and that Zakalwe fears chairs because they’re a reminder of what he did to Darckense. Zakalwe is the Chairmaker.

Now, giving credit where credit is due, it’s a feat that the novel has three retroactive explanations of a single trait. However, the first explanation, which stood for 75% of the book, was poor – mostly because it’s really hard to justify an otherwise normal person having a terrible fear of chairs, and an obsession with his sister, the supposed Chairmaker. It seems clear that Banks designed the scene around the chair to put this interpretation forth (chair sex, plus filling us in on who made the chair, which was irrelevant to the other two interpretations), so it’s just a kind of lame placeholder for the real reason that comes later.

Banks relates sex and war (a common theme in literature and music), so you could also view the summer house scene as foreshadowing of Elethiomel’s violence against Darckense, chairs and all. But in the same chapter in which sex and war are related, nothing bad happens to Shias Engin (the woman that Zakalwe, né Elethiomel, has sex with), so you could be forgiven for not jumping to conclusions when trying to determine whether the scene is just a fanciful metaphor or a hint at something 100 pages later.

Another side-effect of this story-telling mechanism that I thoroughly disliked is just how many characters are introduced only to be forgotten. When you’re warping around time and space, it’s hard to give any characters a solid conclusion, much less minor ones added for exposition’s sake. The focus is clearly on Zakalwe and his humanity, his inhumanity, his struggle and how that relates to the Culture’s use of Zakalwe (war) to achieve its goals. Unfortunately, that focus is all-consuming and the other plotlines are discarded entirely. Compared to the previous two Culture novels, in which the geo-political situation was clarified and almost every named character has an end, this was a disappointing departure. We never know the outcome of any number of Zakalwe’s exploits, even the one that was an integral part of his present-day storyline. I would have enjoyed more of Zakalwe’s back story if the vignettes had been more than just ways to advance information about how they had fucked with his mindset – even if having other meaningful outcomes is contrary to the overall message of bleak moral quandary.

Intellectually, I think I grasp the novel and why he made these choices I disagree with… and yet, as someone that reads for entertainment I have to ask “At what cost?” The narrative was tortured by the structure, and I believe that a better work could have been formulated from the bones of this story and a more conventional approach. Use of Weapons is not, by any stretch of the imagination, a bad book. Ambitious, yes. Flawed, maybe, but not bad. It’s hard to call it one-dimensional, but the description feels right. If not one-dimensional then perhaps fatally focused on getting across its heavy message.

television December 10, 2014 Jack One comment

On Sons of Anarchy

I’m not a huge fan of Sons of Anarchy. I got pulled into it with the young gangster conflicted about the violent life he was born into, trying to move on and make an exit, go legit. While the series was focused around that, it was good. Plenty of crime, violence, drama and internal conflict to spend 45 minutes watching it and be entertained. It effectively jumped the shark when Jax took over the club, and his plans to leave the life of crime ended (which was around the end of season 4).

From there, its decline wasn’t precipitous, but it was steady. Harold Perrineau was interesting and turned in a great performance as Damon Pope, the Lawful Evil businessman, and it was great to watch Jax frame Clay for Pope’s murder, effectively killing two birds with one stone. Yet the die had been cast, and President Jax was never as strong a character as VP Jax under Clay. Jax failed to extricate the club from crime. Tara began to accept her criminal life, went to jail, and became a copy of Gemma, who by season 5 was intolerable to watch. Even in a crazy season 6, the lure of getting the kids out of Charming is enough to drive the plot and convince Tara and Jax to betray the club and serve a jail sentence just to break the cycle.

But this season… this season was terrible.

The first problem is that a lot of characters we were invested in are dead before the season even starts. The Jax – Tara dynamic is gone (she’s dead). The Jax – Clay dynamic is gone (he’s dead). Opie is dead. Juice is discredited. Eli, who was an excellent foil for Jax, is dead. Gemma has already long since become evil. The rest of SAMCRO have become paper cutouts.

The second problem is that Jax apparently has no brain. He burns everyone around him without any proof. He burns Lin, which burns Marks, and burns the Indian Hills charter with nothing but a whisper from Gemma. His mother, yes, but also someone that he knows isn’t trustworthy from the get go (anyone else remember Jax convincing Gemma to backstab Clay after she got high and nearly killed the boys in a car crash?).

And honestly, trustworthiness aside, how could you not suspect Gemma of being the murderer? She was there and saw “an Asian guy” escape. Jax was aware of Tara’s plans at the time of her death and knew that Gemma would have a problem with Tara leaving with the kids. Gemma actively encouraged Jax to kill Clay (and is in general a bad bitch), so it’s not like murder is somehow out of bounds for her. How does this not instantly cause suspicion in Jax? Well, he’s lost quite a few brain cells this season, apparently.

Even if you choose to believe that Jax’s loyalty would prevent him from suspecting his own mother killed his wife, why then blow up your whole world to exact vengeance on a random Asian guy without a shred of proof? Sure, Jax can’t go to Lin and ask politely if he had Tara killed, but he could definitely tap his police contacts (Unser, Patterson, Jarry – and speaking of tapping, Chibs’ relationship with Jarry is unbelievably dumb) and find out, surprise, that guy was in a Vegas drunk tank, off the official record. He probably could have found this out if he’d taken the time to listen to the Asian guy before brutally murdering him too, but I can understand why you wouldn’t want the distraction of talking in the middle of your ritual killing (I read that in an etiquette book once). The point is, this information isn’t exactly top secret and there are probably a lot of other ways you could debunk Gemma’s assertion without going to the Chinese, at which point it either becomes obvious that Gemma is the murderer, or that you at least need to spend more time to find out who is.

I want to make one more thing clear, as a lot of internet comments seem to be under the impression that all of this poor reasoning is due to Jax being destroyed by Tara’s death and that vengeance doesn’t have to be rational. I’d buy that, if it weren’t for the fact that Jax’s revenge takes the form of a plot to destroy Lin in retaliation: plans that are laid out and executed over weeks and months. It’s not like Jax rolled out with a shotgun bent on revenge the night he found out Tara was dead. You can’t excuse shitty writing with blind vengeance. As it stands, Jax could apparently spend a huge amount of time and effort planning his revenge, put his life and the lives of his crew on the line, but not spend a 20 minute phone call to his police contacts, or make some discreet inquiries on the street about whether his revenge made sense. Where is the cunning gangster that required proof of Clay’s misdeeds and then so perfectly orchestrated his downfall?

Going into season 7, everyone knew that the season would hinge on how Jax dealt with the situation, and he dealt with it poorly. It took more than half the season before Jax even questioned whether Gemma was full of shit. It took Abel overhearing Gemma saying something really dumb to Thomas for Jax to get it right. Seriously, who confesses a murder aloud, in a house full of people, to a toddler? It could’ve been anyone at that door. Wendy, Jax, a club member. At least Gemma started the show as a dumb, spiteful bitch, so she didn’t have to fall very far for her part in this season.

Which brings me to the finale itself. On the heels of Jax killing Gemma, Unser, and (indirectly) Juice in what probably should have been the first 45 minutes of the finale instead of an independent episode, Jax basically spends the whole episode abandoning his club, abandoning his children to an ex-gangster and his ex-junkie ex-wife, and throwing his life away for no good reason. The writers desperately tried to shoehorn in some Christ imagery and Shakespeare to fool you into thinking the story has depth, but SOA’s finale was DOA.

Okay, okay, so maybe in the process of realizing that he’s gotten a lot of people killed and imprisoned pursuing vengeance on the wrong people he’s decided that he can’t be allowed to live. Somehow, he believes emulating his father’s murder makes some sort of point (what point? I have no idea). Yet, how would Jax’s life be different if he’d actually been raised by a father? How would the MC be different if JT was still in charge? When Jax destroys his own notes and his father’s notes, it’s because he doesn’t want his children to follow in his footsteps, but he fails to realize that children need good parents more than they need to be insulated from the evils of the past. Jax could be so much more effective if he actually manned up and raised those kids as a loving father. Everyone knew that Jax was going to die in the finale. It’s a cliche at this point. A trope. I just find it hard to believe that this hardcore gangster did it to himself. Then again, if I lost 50 points of IQ over the course of an off-season I might off myself too.

The only redeeming feature of Jax in this episode is his murdering Barosky and Marks and killing some of the Irish to set things right. But these are almost entirely independent of the rest of the plot. He walks away from both murders. At this point, a highly connected top level gangster like Jax could just make a bid to disappear. He could go to Nero’s farm with his kids. He could do anything, yet he’s only apprehended when he, like a moron, decides to talk to his dead dad’s marker on the side of the road. That’s right, a suicidal gangster talks regretfully to his murdered gangster dad and yet fails to realize that maybe he should allow Abel and Thomas to avoid the same situation by being there, instead of becoming another stop on the Teller Highway Death Tour. I was disappointed with this bridge from Jax’s vigilante justice. If it had been more intense, in perilous flight from the police, trying to reach a safehouse and eventually the boys, I would have felt for him and felt that he was still attempting to honor that initial impulse from way back in season 1. Instead, it’s a flaccid and utterly baffling mirror of his dad’s accident, starting from its endpoint (oooh, symbolism).

The final scene is a weak “car chase” that’s really just a 30-mile-an-hour police funeral procession, some really terrible CG crows, and Jax deciding to give the grill of an oncoming semi an up-close inspection. What. The. Fuck.

To summarize, Jax finally puts someone else in charge of the MC, escapes club justice, tidies up the loose criminal ends, and escapes real justice, only to allow himself to be caught so… he can have witnesses to his suicide? If he hadn’t decided to kill himself in the most terrifying, painful way possible, he could’ve gotten away with it and moved on with his life. No such luck.

I hated this season and this episode was garbage.

My only consolation is that SOA won’t be back and I don’t have to hear one more fucking awful Katey Sagal cover, or another butchering of Queen or Nirvana set to slo-mo shots of random gangster shit.

books, scifi October 1, 2014 Jack One comment

On ‘Consider Phlebas’

I’ve read my fair share of science fiction. I wouldn’t consider myself an expert, but when it comes to sci-fi (and most fiction for that matter) I’ve realized there are two major components. There are the ideas and the execution of them.

For example, Asimov was prolific and his stories were very good, but Asimov was an ideas kind of guy. His plots in the Foundation series, and the Lije Baley stories in the Robot series, were interesting because he envisioned a world that was quite different from ours and intricate enough to hang a good plot on, but when it comes down to it, the man wrote functionally. He conveyed his meaning, and you are interested in that meaning, but in the end I would characterize his style as austere. Very imaginative, but very plainly executed. Dry, even.

There are many titans of science fiction that are similar. In fact, I’d say that if you have great ideas and write sci-fi, it’s really not a burden to be lacking in prose. Herbert’s Dune, Orwell’s 1984, Bradbury’s Fahrenheit 451. It’s not that they’re poorly written, it’s that they’re classics because of their ideas, or their satire rather than their language.

And of course that’s not to mention the raft of… lesser works out there. The works of Crichton, for example. Jurassic Park is a classic, but I’d put that more on Spielberg than Crichton. Crichton’s Congo and Sphere are fun reads, but nothing to write a thesis on. Timeline read like it was a screenplay-in-waiting. Fantasy, which is so often lumped in with sci-fi (and there is a blurry line in between), is rife with successful authors trading on ideas rather than execution. George R. R. Martin’s books are well conceived, but you read them to find out what happens next, not for the pleasure of reading them. Tolkien, the grandfather of all modern High Fantasy, is one of the worst offenders in this regard, but he was more a master of epic mythology and linguistics than an author.

I bring these instances up not to shame these authors (in fact, I’m a fan of all of them) but to note just how rare it is to find a really great author outside of “literary fiction” who trades not just in ideas, but also in writing that packs a punch and doesn’t shy away from being stylistic.

When I cracked Consider Phlebas, the first of Iain M. Banks’ Culture novels, I admit it was with trepidation. The term “space opera” gets thrown around a lot in a pejorative light and, quite frankly, I’m not really one to dig novels that are basically episodes of Star Trek or rehashes of Star Wars. I imagined that it would be yet another band of heroes fighting against an evil galactic empire, or some opposing alien (*yawn*) force. In the first chapter, trying to absorb the names alone made me fear that I’d started reading The Lord of the Rings in Space.

But I couldn’t have been more wrong.

Consider Phlebas has everything: a compelling plot, believable relationships, well-thought-out action, clever (but not trite) dialogue, fantastic locations, realistic tech, vast scale, suspense, surprise, philosophy. I could go on. The best part is that Banks’ style rings clear and true from the first page to the last.

It’s been a long time since a novel got me to picture each scene the way Consider Phlebas did. The writing was never awkward, never confusing, and yet extremely evocative. Amazingly, this includes feats like describing life on a massive Orbital, something inherently beyond the sort of day-to-day experience we have in the 21st century. And it’s not just that he describes its shape and dimensions, or the people on it, well; it’s that he makes you feel like it’s a real place, and that the people who live there are three-dimensional and not just the background noise of some boring plot point.

It’s similar with the technology, where Banks really went above and beyond. A lot of really great work (like Gibson’s Neuromancer, another of my stylistic favorites) benefits from the fact that it takes place in the near future. Things are different, but also the same. Banks had no such help, and yet even when he details things that exist only in the realm of sci-fi, his writing has that tinge of truth behind it that lets your brain accept that such a thing is not only possible, but even likely. At one stage, he spent a few paragraphs describing just what it would look like to gaze out of the window of a spacecraft in hyperspace. It’s well-trodden ground, staked out somewhere between Star Wars’ hyperspace (all the stars turn into lines!) and Star Trek’s warp speed (the ship disappears into a point!), and likely touched on by every single “space opera” between here and Jules Verne. And yet Banks didn’t just cop out with a single sentence (“They went to hyperspace and all the stars turned into lines!”); he crafted a beautiful scene, detailing what would be seen and how it relates to real space and celestial landmarks.

The final point I’ll mention in what should hopefully read as a ringing endorsement of Consider Phlebas is that, between all of the wonderful metaphors and descriptive language, there is a lot of action, and it is all well written. So often in prose the excitement is dulled by awkward phrasing or poorly paced and ordered action sequences, so I was pleased to find that even in the midst of battle I could easily follow what was going on without getting confused and having to re-read, or getting bored with minutiae. The whole book flowed from static scene to dynamic battle and back again without skipping a beat.

If I had to register one complaint about the book, it would be that it was too short, but even that criticism would only be a joke to underscore how much I enjoyed it.

If you’re a sci-fi reader who, like me, appreciates a little more weight in your worlds, then you owe it to yourself to give Consider Phlebas a read. It is a masterpiece.