TL;DR: A daemon to handle the LEDs on the LincStation N2 NAS

I normally run backups on my network to a SmartOS server equipped with 2x 14TB mirrored hard drives running ZFS, using rsync and SMB/CIFS as the access protocols (and eventually Kopia). It’s a bit noisy and sluggish, despite having a 10G Ethernet connection, so I recently bought a LincPlus LincStation N2 all-flash server during its Kickstarter to eventually replace it.

The N2 is a very slim device: it has 10G (copper) networking, supports 4x M.2 SSDs plus 2x 2.5" SATA drives (I suppose they could be spinning rust), and includes a 128GB eMMC boot drive so you don’t have to waste data drives on the operating system. The only thing missing, really, is ECC memory.

It is supplied with a trial version of the semi-open Unraid OS, which I suppose is better than the proprietary QNAP QuTS Hero OS or the execrable btrfs-based Synology OS (Synology is beyond the pale because they are going to force their NAS appliance users to use only their own-branded and marked-up hard drives).

I know how to run a Linux system and I don’t need the limitations and hand-holding of a NAS appliance OS, so being able to install my OS of choice, in this case Alpine Linux, was non-negotiable. I managed to do so, after some fiddling: the device does not honor the BIOS boot settings, or even boot off a USB drive, until you physically remove, using pliers, the Unraid USB drive plugged into the first M.2 bay.

Once Alpine was installed, it was fairly smooth sailing. While the 4x 4TB Lexar NM790 SSDs are throttled to a single PCIe lane each due to the limitations of the Intel N100, that is still plenty good enough to saturate the 10Gbps Ethernet connection. My main home server (an HP Z2 Mini G4) took this opportunity to die after 6 years of loyal service, and the interim replacement I put in place only has 1Gbps Ethernet; you can feel the difference. I have a Lenovo ThinkStation P3 Ultra on order as a permanent replacement, the only current SFF workstation available with both ECC RAM and 10G Ethernet.

One thing I noticed, however, is that all the status LEDs on the front kept flashing constantly instead of only when a drive or the network sees activity. To make this extra annoying, they sit in my peripheral vision in my home office. Peripheral vision is of course extremely sensitive to flicker and movement, survival traits from when our ancestors on the savanna needed to be on the lookout for predators sneaking up on them. I had to do something about it.

On investigating, it seems the LEDs are SMBus devices controlled by closed-source software included on the Unraid flash drive, but also readily available on GitHub. Some reverse-engineering with strace and Hopper Disassembler showed it is written in Go and constantly spawns subprocesses running the i2c-tools i2cget and i2cset commands, which is quite inefficient.

I first rigged up a simple shell script to just turn off all the LEDs:

#!/bin/sh
# https://gist.github.com/aluevano/ca6431f4f15d8ea62df57e67df7d4c3d

# SATA 1
i2cset -y 11 0x26 0xB0 0x04 # white off
i2cset -y 11 0x26 0xB0 0x08 # red off 
i2cset -y 11 0x26 0x52 0x00 # blinking off

# SATA 2
i2cset -y 11 0x26 0xB0 0x10 # white off
i2cset -y 11 0x26 0xB0 0x20 # red off 
i2cset -y 11 0x26 0x54 0x00 # blinking off

# Network
i2cset -y 11 0x26 0xB0 0x40 # white off
i2cset -y 11 0x26 0xB0 0x80 # red off 
i2cset -y 11 0x26 0x56 0x00 # blinking off

# NVMe 1
i2cset -y 11 0x26 0xB1 0x01 # white off
i2cset -y 11 0x26 0xB1 0x02 # red off 
i2cset -y 11 0x26 0x58 0x00 # blinking off

# NVMe 2
i2cset -y 11 0x26 0xB1 0x04 # white off
i2cset -y 11 0x26 0xB1 0x08 # red off
i2cset -y 11 0x26 0x5A 0x00 # blinking off

# NVMe 3
i2cset -y 11 0x26 0xB1 0x10 # white off
i2cset -y 11 0x26 0xB1 0x20 # red off
i2cset -y 11 0x26 0x5C 0x00 # blinking off

# NVMe 4
i2cset -y 11 0x26 0xB1 0x40 # white off
i2cset -y 11 0x26 0xB1 0x80 # red off
i2cset -y 11 0x26 0x5E 0x00 # blinking off
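
For comparison, the same register writes can be issued directly from C through the kernel’s i2c-dev interface, with no i2cset subprocesses. A minimal sketch, assuming the libi2c helpers shipped with i2c-tools (link with -li2c); the bus number, device address and register values come straight from the script above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
    /* bus 11 and address 0x26 are where the LED controller lives on my N2 */
    int fd = open("/dev/i2c-11", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x26) < 0) {
        perror("i2c");
        return 1;
    }
    /* equivalent of the "Network" stanza: i2cset -y 11 0x26 ... */
    i2c_smbus_write_byte_data(fd, 0xB0, 0x40); /* white off */
    i2c_smbus_write_byte_data(fd, 0xB0, 0x80); /* red off */
    i2c_smbus_write_byte_data(fd, 0x56, 0x00); /* blinking off */
    close(fd);
    return 0;
}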

But turning everything off is throwing the baby out with the bathwater, losing all status indications. I decided to write my own replacement for the LincPlus LED daemon. To save time writing tedious C boilerplate, I asked Claude to write it for me (so am I a systems vibe coder now?). The prompts were:

  1. Using the docs at https://gist.github.com/aluevano/ca6431f4f15d8ea62df57e67df7d4c3d as a guide, write a program in C that uses SMBus/I2C to set LEDs based on disk utilization on sda, sdb, nvme0n1, nvme1n1, nvme2n1, nvme3n1 and network activity
  2. Set MAX_I2C_BUS to a higher value, on my system, the bus found is 11
  3. Use SMBus calls rather than plain I2C whenever possible

Claude did a surprisingly good job. I had to make the following adjustments for it to work properly:

  • Make it use SMBus calls rather than raw I2C to check whether a bus actually hosts the LincStation LED device at 0x26, avoiding false positives (done with prompt 3 above).
  • Change the code to report utilization as 0%, not 100%, when io_time is unchanged (see the sketch after this list).
  • Make the network activity check look only at eth0; if you are running Docker or LXC, there will be other virtual or bridge interfaces that don’t actually generate outside traffic.
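
To make the last two concrete, here is roughly what those fixes look like. This is a sketch only, with my own illustrative names rather than the daemon’s actual code: io_ticks (milliseconds spent doing I/O) is the tenth field of /sys/block/DEV/stat per the kernel’s iostats documentation, and the eth0 byte counters live under /sys/class/net/eth0/statistics.

#include <stdio.h>
#include <unistd.h>

/* io_ticks (ms spent doing I/O) is the 10th field of /sys/block/DEV/stat */
static long read_io_ticks(const char *dev)
{
    char path[128];
    long f[10] = {0};
    FILE *fp;

    snprintf(path, sizeof path, "/sys/block/%s/stat", dev);
    if ((fp = fopen(path, "r")) == NULL)
        return -1;
    fscanf(fp, "%ld %ld %ld %ld %ld %ld %ld %ld %ld %ld",
           &f[0], &f[1], &f[2], &f[3], &f[4],
           &f[5], &f[6], &f[7], &f[8], &f[9]);
    fclose(fp);
    return f[9];
}

/* utilization over a sampling interval; an unchanged io_ticks now
 * reads as idle (0%), not busy (100%) */
static int utilization_pct(long prev, long cur, long interval_ms)
{
    if (prev < 0 || cur <= prev)
        return 0;
    long pct = (cur - prev) * 100 / interval_ms;
    return pct > 100 ? 100 : (int)pct;
}

/* count only eth0, so Docker/LXC bridges and veths don't light the LED */
static long long eth0_bytes(void)
{
    static const char *counters[] = {
        "/sys/class/net/eth0/statistics/rx_bytes",
        "/sys/class/net/eth0/statistics/tx_bytes",
    };
    long long total = 0, v;

    for (int i = 0; i < 2; i++) {
        FILE *fp = fopen(counters[i], "r");
        if (fp == NULL)
            continue;
        if (fscanf(fp, "%lld", &v) == 1)
            total += v;
        fclose(fp);
    }
    return total;
}

int main(void)
{
    long prev = read_io_ticks("nvme0n1");
    sleep(1); /* the real daemon resamples in a polling loop */
    long cur = read_io_ticks("nvme0n1");
    printf("nvme0n1: %d%% busy, eth0: %lld bytes so far\n",
           utilization_pct(prev, cur, 1000), eth0_bytes());
    return 0;
}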

The most impressive thing was how it was able to reverse-engineer the gist and infer the structure of the I2C calls.
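
As far as I can tell from the gist alone, that structure amounts to a small table: each LED has a blink-control register stepping by two from 0x52, and a pair of bits in one of two “off” command registers, 0xB0 for the SATA and network LEDs and 0xB1 for the four NVMe LEDs. Expressed in C (the names are mine, not LincPlus’s):

struct lincstation_led {
    const char   *name;
    unsigned char off_reg;   /* 0xB0 or 0xB1 */
    unsigned char white_off; /* value to write to off_reg: white off */
    unsigned char red_off;   /* value to write to off_reg: red off */
    unsigned char blink_reg; /* write 0x00 here: blinking off */
};

static const struct lincstation_led leds[] = {
    { "sata1", 0xB0, 0x04, 0x08, 0x52 },
    { "sata2", 0xB0, 0x10, 0x20, 0x54 },
    { "net",   0xB0, 0x40, 0x80, 0x56 },
    { "nvme1", 0xB1, 0x01, 0x02, 0x58 },
    { "nvme2", 0xB1, 0x04, 0x08, 0x5A },
    { "nvme3", 0xB1, 0x10, 0x20, 0x5C },
    { "nvme4", 0xB1, 0x40, 0x80, 0x5E },
};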

All in all, this took about 4 hours’ work, including the (non-AI-assisted) research into how the LEDs work and some attempted reverse-engineering of the LincStation daemon using Hopper Disassembler.

Here is the result: https://github.com/fazalmajid/linstation_leds. I will probably have to make some tweaks, as the network activity LED is now solidly on due to background traffic on the network.