
Some random thoughts on improving self-hosting of software at home.


Self-hosting is hard enough, and the business incentives bad enough, that people have seemingly given up on it, in spite of the numerous costs of handing your data to companies designed to profit off of it [1]. Follow me down a rabbit-hole for a few moments and let me indulge in a bit of mindful fun around this subject. Were I working on this problem full-time, these are the four domains I see as the most influential in making it a tractable concern.

Hosting Software: Decidedly Non-trivial

It's 2017 and it's still non-trivial to run persistent services on your home network. Systems like Sandstorm and FreeNAS improve on the state of the world here, but FreeNAS offers a fairly limited selection of software, and Sandstorm makes some interesting architectural decisions which make hosting general-purpose software non-trivial [2].

Docker tries to make inroads here, but only if you're already technically competent. Non-technical folks are in no position to run dockerized services, even though Docker solves the distribution problem handily. Docker also doesn't address the base OS, security, system updates, and the like.

rpm-ostree does some cool things for update distribution and application, giving users atomic updates and rollbacks of the system image. A combination of ostree and strong pins of Docker images (to the SHA256 digest of the build, not a tag) could provide a certain level of determinism to system images while still allowing service components to iterate out of sync with the ostree updates. Wrap this in a simple updater UI served from the host's admin panel and anyone could apply ostree updates to get new versions of their service components.
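
To make that concrete, here's a minimal sketch of what such an updater could do under the hood, assuming the stock docker CLI and a hypothetical pins.json manifest mapping service names to digest-pinned image references; the names and paths are illustrative, not a real tool:

```python
#!/usr/bin/env python3
"""Hypothetical updater sketch: pull digest-pinned images and restart services.

Assumes the stock docker CLI and an illustrative pins.json manifest of the
form {"wiki": "nginx@sha256:<digest>", ...}; names and paths are made up.
"""
import json
import subprocess


def apply_pins(manifest_path="pins.json"):
    with open(manifest_path) as f:
        pins = json.load(f)

    for service, pinned_ref in pins.items():
        # Pulling by digest (image@sha256:...) is deterministic; a tag is not.
        subprocess.run(["docker", "pull", pinned_ref], check=True)
        # Recreate the container from the pinned image.
        subprocess.run(["docker", "rm", "-f", service], check=False)
        subprocess.run(
            ["docker", "run", "-d", "--name", service,
             "--restart", "unless-stopped", pinned_ref],
            check=True,
        )


if __name__ == "__main__":
    apply_pins()
```

Shipping the manifest inside the ostree commit is what ties the two layers together: the OS image and the service digests roll forward and back as one unit.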

Serving Software from Home: Somehow Harder

ISPs block interesting ports. NAT punching/UPnP is unreliable. Configuring SOHO and consumer routers is hellish. Debugging this shit when it breaks takes a networking degree from a technical trade school.

I propose abandoning that world altogether and using decentralized tech that can carry RPC calls, stepping past the home-network layer entirely. I see two candidate solutions to this.

The first is the good old Tor hidden service. Clients would be expected to abstract this away, and .onion URLs don't satisfy the "human-meaningful" corner of Zooko's triangle. Something like Namecoin could link a human-meaningful name to a Tor hidden service, but I'm not aware of any solution which currently does this, nor of any interface which aims to make Namecoin useful to non-technical folks.
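
For illustration, here's a sketch of the plumbing side using the stem controller library, assuming a local Tor daemon with ControlPort 9051 enabled; the 80 → 8080 mapping is a placeholder, and this is exactly the part a host would hide behind a friendlier UI:

```python
"""Sketch: expose a local web app as a Tor hidden service, sidestepping port
forwarding entirely. Assumes a Tor daemon with ControlPort 9051 enabled and
the stem library installed; the 80 -> 8080 mapping is illustrative."""
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # Ephemeral onion service: hidden-service port 80 maps to localhost:8080.
    service = controller.create_ephemeral_hidden_service(
        {80: 8080}, await_publication=True
    )
    print("Reachable at %s.onion" % service.service_id)
    input("Press enter to tear the service down...")
```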

A simpler-to-implement, yet less fault-tolerant, solution would be a fleet of matrix.org homeservers which exist only to provide RPC between applications. A room would link a service and a user, acting as a persistent RPC channel. History could be aggressively trimmed by the clients or by modifications to the homeserver, given that looking at old RPC calls is probably not useful, perhaps even undesirable. Making Matrix's Olm (Axolotl-derived) end-to-end encryption transparent to users would be a prerequisite here: if the server owners have a full network view of RPC calls, it's not much better than the current state of the world. So maybe that's not simpler after all.
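
As a rough illustration of the room-as-RPC-channel idea, here's a hedged sketch using the matrix-nio client library; the homeserver, credentials, room ID, and method names are all placeholders, and a real implementation would encrypt and trim history as described above:

```python
"""Sketch: a Matrix room as a crude RPC channel between a client and a
home-hosted service. The homeserver, user, password, and room id are all
placeholders; assumes the matrix-nio client library."""
import asyncio
import json

from nio import AsyncClient


async def call_remote(method, params):
    client = AsyncClient("https://homeserver.example", "@me:homeserver.example")
    await client.login("correct horse battery staple")  # placeholder credential
    # Encode the RPC call as an ordinary room message; the service sitting in
    # the same room would decode the body and act on it.
    await client.room_send(
        room_id="!serviceroom:homeserver.example",
        message_type="m.room.message",
        content={"msgtype": "m.text",
                 "body": json.dumps({"method": method, "params": params})},
    )
    await client.close()


asyncio.run(call_remote("photos.list", {"album": "2017-01"}))
```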

Recovering from Disaster: The Hardest Part

Software-as-a-Service providers usually have a team dedicated to making sure their users' data won't disappear in the case of a failed hard disk, a meteor, or a mistyped command. I don't have a good answer to this, except for incremental backups handled at the OS layer, storing data outside of Docker volume containers in a place where the OS can back it up. Restoring is non-trivial, and verifying that your backups actually work is even more so: if you're only running a single host, you can't wipe and restore. Running a highly-available cluster of hosts is not something I would wish on anyone running a home-computing setup, and it makes the network/routing layer even more impossible to deal with. Offsite backups are also a problem which would need to be solved. I doubt folks would be interested in farming out disk and bandwidth to something like Tahoe-LAFS, so some sort of centralized backup service would be required, and now regular users have to manage encryption keys, and we all know how that goes.
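
As one possible shape for the OS-layer piece, here's a hedged sketch using restic against bind-mounted service data; the paths and repository are placeholders and assume the repository has already been initialized:

```python
"""Sketch: nightly incremental backup of service data with restic. The data
path and repository location are placeholders; assumes restic is installed,
the repository has been initialized, and RESTIC_PASSWORD is exported."""
import subprocess

REPO = "/mnt/backup-disk/restic-repo"  # could just as well be an offsite sftp/s3 repo
DATA = "/srv/appdata"                  # service data bind-mounted out of containers

# restic snapshots are incremental and deduplicated by default.
subprocess.run(["restic", "-r", REPO, "backup", DATA], check=True)
# Sanity-check the repository so a broken backup is noticed before it's needed.
subprocess.run(["restic", "-r", REPO, "check"], check=True)
```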

Sharing with Others: Solutions in Flight

A lot of this work, in my opinion, has already been done by the IndieWeb movement, the W3C Social Web Working Group, and the open-web movement in general. Adopting these protocols over proprietary (or even non-standardized) alternatives makes sharing somewhat future-proof, given that the standards have the weight of the W3C behind them.
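
As an example of how lightweight these standards can be, Webmention (one of the W3C Social Web Working Group recommendations) boils cross-site sharing down to a single form-encoded POST; a sketch with placeholder URLs:

```python
"""Sketch: sending a Webmention, one of the W3C Social Web Working Group
recommendations. Both URLs are placeholders; a real client first discovers
the endpoint from the target page's Link header or <link> tag."""
import requests

requests.post(
    "https://their-site.example/webmention",  # endpoint advertised by the target
    data={
        "source": "https://my-home-server.example/notes/42",   # my reply
        "target": "https://their-site.example/articles/7",     # the post replied to
    },
)
```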

Wrapping up

No one thinks this is an easy problem to solve, because it's not. The institutional and monetary incentives to solve these problems as a whole don't currently exist, because there's no money to be made. Users largely refuse to pay for software, and users largely refuse to care about software, leaving the folks who do in a dangerous minority.

Footnotes:

[1] Enumeration of which is left as an exercise to the reader.
[2] https://sandstorm.io/how-it-works: "Sandstorm is radically different from all other web app infrastructure today."
