Narrowing Down NAS Hardware Specs
ARM vs N100

My NAS is currently a Supermicro X10SDV-4C-TLN2F with 128GB DDR4 registered ECC. It was expensive to build and it was way, way overkill. It runs Proxmox, but all the containers were moved to two Incus instances on two HP EliteDesk 800 G4 Minis. The old 128GB server no longer has any containers running; it simply hosts an NFSv4 export on a ZFS mirror for use by the two Incus nodes. Normally, I would peel out the PVE packages, remove the PVE repos, and be done with it.
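For context, the storage side of that box boils down to roughly the following. This is a minimal sketch, not my exact layout; the pool name, dataset, subnet, and disk IDs are placeholders.

```
# Two-disk ZFS mirror, exported over NFSv4 to the Incus nodes (all names are placeholders).
zpool create tank mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1
zfs create tank/incus

# Plain /etc/exports entry; adjust the subnet to wherever the Incus nodes live.
echo '/tank/incus 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```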
However, this server uses 1) a Broadwell-era Xeon D-1521 CPU, and 2) two 10GbE network interfaces. The package performs well, but consumes 25W just sitting there, even after BIOS updates to C-state settings and OS-level C-state tweaks.
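For reference, here is a quick way to check whether the package is actually reaching its deeper C-states after tweaks like those; these are standard sysfs paths and powertop, nothing specific to this board.

```
# List the idle states the kernel exposes and how often each has been entered.
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/usage

# powertop's idle report shows package-level C-state residency over a sampling window.
powertop --time=60
```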
RISC
I'm a fan of RISC architecture. It's a combination of my experiences running several Sun SPARC and PPC machines to host server services, my drive towards more power-efficient builds, and some mild PTSD from the days of the long Pentium 4 instruction pipeline causing very dumb performance issues. Motorola PPC and old Sun SPARC are long gone from modern support and security update streams. The same is true of IBM Power architecture hardware; it's just old power-hungry gear on eBay now.
There are, of course, modern RISC alternatives:
- RISC-V is definitely moving forward quickly, and it's very tempting to set up shop in that camp, but even I have a threshold for how much manual intervention I want to commit to projects that are supposed to just work, and a NAS is definitely that. That might change in as little as a year, once the instruction set extensions are agreed upon, and maybe I'll revisit RISC-V at that point.
- There are loads and loads of MIPS custom CPU implementations, like in my Synology DS211J backup device, but these are all over the place in terms of support. I would like a platform that will likely get support and updates for the next few years, at least. Currently, getting most MIPS systems to use a standard software stack like Debian would entail making a janky JTAG cable and probably buying a second system after I brick the first one writing the wrong values to a CMOS or ROM partition.
Compared to RISC-V, ARM64 is a lot more mature. Not simply because of the billions of Apple and Snapdragon SoCs, but also because manufacturers like Raspberry Pi and Radxa have understood that hardware is only half of the equation: if you also offer software support like drivers and images, you get a lot more traction with adoption. Radxa also happens to beat the pants off Raspberry Pi in terms of performance/$.
Intel N100
On paper, it's pretty compelling to get an N100 mini-ITX board with tons of SATA connections and just drop it in where the Supermicro board is.
I did end up getting an insanely cheap N100 mini PC and put an ASMedia ASM1166-based SATA controller (M.2 2280, in the NVMe slot) in it. It's workable, and NFS performance suggests it will be fine as a stand-in when that DS211J dies, but its power consumption is disappointingly high. This has been noticed in many N100 NAS builds lately and was discussed on Linux After Dark recently.
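If you go this route, it's worth confirming what link the ASM1166 actually negotiates in the M.2 slot, since that bounds aggregate SATA throughput. A rough sketch; the PCI address is a placeholder you would take from the first command.

```
# Find the ASMedia controller and note its PCI address.
lspci | grep -i asmedia

# Check the negotiated PCIe link speed and width (replace 01:00.0 with the address from above).
sudo lspci -vv -s 01:00.0 | grep -i lnksta
```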
Ultimately, I have some concerns about the quality of Intel's hardware and software presence, especially after they sacked a huge chunk of their Linux driver engineers.
New NAS Specs
In my quest to replace a power-hungry, multi-purpose device with a power-efficient, single-purpose machine, I used the following criteria:
- Must be ARM. Among all the architectures, this is currently the most sensible choice for a RISC build.
- Must be supported by a standard Debian build. Many ARM SBCs have come and gone, and lots of them are relegated to the dustbin of history because they needed closed source drivers and the manufacturer stopped supporting them.
- Must have at least one PCIe lane. Modern hardware architecture is dependent on PCIe; having access to PCIe means hardware options. For my purposes, this gives me some flexibility in implementing a storage controller.
- Must be low-power. This coincides with being cheap, because Ampere-based solutions and fancy ARM workstations are very expensive and have arguably not-so-great power consumption metrics.
- Must have gigabit networking. Technically, 100 Mbit is enough to stream 1080p quite easily, but for some operations the throughput of gigabit is nice.
So we are left with SBCs and some kind of break-out for SATA.
Radxa has been impressing us lately with a huge array of SBC boards that fit the above. And if Jeff Geerling is happy enough with their SBCs, we can put some faith in the quality of their gear. Radxa also sells a Penta SATA Hat.
SATA, not SAS? One PCIe Lane? No 10GbE?
Yes.
- Gigabit is more than sufficient for 1080p content streaming. In fact, it will be fine over 100 Mbit Ethernet.
- You can saturate a gigabit connection with a single SATA disk (gigabit tops out at roughly 110 MiB/s in practice); see the back-of-the-envelope numbers after this list.
- You can certainly overwhelm a single PCIe 2.1 lane with a single SATA disk, but this is a moot point if you cannot push more than gigabit out over the network.
- SAS disks are now moving something on the order of 24Gbps with very advanced bus features. SAS is also power-hungry and expensive.
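The back-of-the-envelope numbers behind that list, for anyone who wants to sanity-check them:

```
# Gigabit Ethernet: 1 Gbit/s / 8 = 125 MB/s raw; expect ~110-118 MiB/s after protocol overhead.
echo $(( 1000000000 / 8 / 1024 / 1024 ))   # ~119 MiB/s theoretical ceiling

# One PCIe 2.x lane: 5 GT/s with 8b/10b encoding, roughly 500 MB/s per direction.
# A SATA III link is 6 Gbit/s, roughly 600 MB/s, so a fast SATA SSD can already
# exceed one lane, and both figures sit far above what gigabit Ethernet can carry.
```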
There's a tendency among reviewers, even respectable ones, to insist that higher-spec is better, but that leads to technology paralysis, always looking for the "perfect" solution. Plus, I'm not upgrading my infrastructure to 2.5GbE or 10GbE.
Pending Review
So now I wait for my Radxa Rock 2A and Radxa Penta SATA Hat. I'll be back with some fio numbers and probably some Rube Goldberg hardware setup photos. This will likely involve some 3D printing at the local library.
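For the curious, the numbers will come from something along these lines; the mount path, file size, and runtime here are placeholders rather than the final benchmark plan.

```
# Sequential read over the NFS mount from one of the Incus nodes (all values are placeholders).
fio --name=seqread --filename=/mnt/nas/fio.dat --size=4G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=4 \
    --runtime=60 --time_based --group_reporting
```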