How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of?
I’m currently in the process of “building” my own server and I’m kinda wondering how “far” most people go, where y’all take shortcuts, and what you spend effort getting just right.
I’m a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.
An honest soul
I get paid to do shit with rigor; I don’t have the time, energy, or help to make something classy for funsies. I’m also kind of a grumpy old man: while I’ll praise and embrace Python’s addition of f-strings, which make life better in myriad ways, I eschew the worse laziness of the containers-for-everything attitude we see for deployment.
Maybe a day shall come when containers are truly less of a headache than just thinking shit through the first time, and I’ll begrudgingly adapt and grow, but that day ain’t today.
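For reference, an “aggressive” iptables posture like the one mentioned above usually just means default-deny inbound with a few explicit allows; something roughly like this (the open ports are examples, not a recommendation):

```sh
# default-deny everything inbound, allow loopback and established traffic
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# poke holes only for what you actually run
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # whatever got spun up this weekend
```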
I use debian VMs and create rootless podman containers for everything. Here’s my collection so far.
I’m currently in the process of learning how to combine this with ansible… that would save me some time when migrating servers/instances.
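For anyone wanting to try the same setup, a minimal sketch of a rootless podman container wrapped in a per-user systemd service (the user, image, and names are placeholders):

```sh
# let the unprivileged user's services keep running without an open login session
loginctl enable-linger deploy
# run a container rootless, then wrap it in a user-level systemd unit
podman run -d --name whoami -p 8080:80 docker.io/traefik/whoami
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name whoami > ~/.config/systemd/user/whoami.service
systemctl --user daemon-reload
systemctl --user enable --now whoami.service
```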
Debian + nginx + docker (compose).
That’s usually enough for me. I have all my docker compose files in their respective directories in my home directory, like
~/red-discordbot/docker-compose.yml
The only headache I’ve dealt with is permissions, because I have to run docker as root and it leaves a lot of messy permissions in the home directories. I started trying rootless docker earlier and it’s been great so far.
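For what it’s worth, the switch to rootless docker on Debian is roughly this (assuming the docker-ce-rootless-extras package is already installed):

```sh
# prerequisites for user namespaces and the per-user systemd session
sudo apt-get install -y uidmap dbus-user-session
# set up the rootless daemon for the current (non-root) user
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker
# point the CLI and compose at the rootless socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
```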
edit: I also use rclone for backups.
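The backup part can be as simple as an rclone sync per service directory to whatever remote you’ve configured (the remote and path names here are made up):

```sh
# mirror each per-service directory to a configured remote called "backup"
for d in ~/red-discordbot ~/nginx; do
  rclone sync "$d" "backup:server-backups/$(basename "$d")" --progress
done
```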
Raspberry Pi, Arch Linux, docker-compose. I really need to look up ansible.
About two years ago my setup had gotten out of control, as it will. Closet full of crap, all running VMs, all poorly managed by chef. Different linux flavors everywhere.
Now it’s one big physical ubuntu box. Everything gets its own ubuntu VM. These days if I can’t do it in shell scripts and xml I’m annoyed; anything fancier than that I’d better be getting paid for. I document in markdown as I go and rsync the important stuff from each VM to an external drive every night. If something goes wrong I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.
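The nightly rsync in a setup like that is usually just a cron entry per VM, something along these lines (paths and schedule are placeholders):

```sh
# /etc/cron.d/nightly-backup — push the important bits to the external disk at 02:00
0 2 * * * root rsync -a --delete /srv/important/ /mnt/external/$(hostname)/
```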
For personal Linux servers, I tend to run Debian or Ubuntu, with a pretty simple “base” setup that I just run through manually in my head.
- Set up my personal account.
- Upload my SSH keys.
- Configure the hostname (usually named after something in Star Trek 🖖).
- Configure the /etc/hosts file.
- Make sure it is fully patched.
- Set up ZeroTier.
- Set up Telegraf to ship some metrics.
- Reboot.
I don’t automate any of this because I don’t see a whole lot of point in doing it.
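For the curious, that checklist translates to roughly the following on a fresh Debian/Ubuntu box (the username and hostname are placeholders, and the Telegraf line assumes the InfluxData apt repo is already configured):

```sh
adduser alice && usermod -aG sudo alice             # personal account
install -d -m 700 -o alice -g alice ~alice/.ssh     # SSH keys
install -m 600 -o alice -g alice authorized_keys ~alice/.ssh/authorized_keys
hostnamectl set-hostname defiant                    # Star Trek hostname
echo "127.0.1.1 defiant" >> /etc/hosts
apt update && apt full-upgrade -y                   # fully patched
curl -s https://install.zerotier.com | bash         # ZeroTier
apt install -y telegraf                             # metrics shipper
reboot
```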
Super interesting to me that you swap between Debian and Ubuntu. Is there any rhyme or reason to why you use one over the other?
I tend to prefer installing Debian on a server, but recently I did install Ubuntu’s recent LTS on a box because I was running into an issue with the latest version of Debian. I didn’t want to revert to an earlier version of Debian or spend a bunch of time figuring out the problem I was having with Python, so I opted to use Ubuntu, which worked.
Ubuntu is based on Debian, so it’s like using the same operating system, as far as I’m concerned.
I use NixOS on almost all my servers, with declarative configuration. I can also install my config in one command with NixOS-Anywhere
It allows me to improve my setup bit by bit without having to keep track of what I did on specific machines
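The one-command install mentioned here is nixos-anywhere; the invocation looks roughly like this (the flake path, host name, and IP are placeholders):

```sh
# install the "server1" configuration from a local flake onto a box reachable over SSH as root
nix run github:nix-community/nixos-anywhere -- --flake .#server1 --target-host root@192.0.2.10
```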
Usually Debian as base, then ansible to set up openssh for access. For the longest time I just ran docker-compose straight on bare metal; these days, though, I prefer k3s.
I use the following procedure with ansible.
- Set up the server with the things I need for k3s to run
- Set up k3s
- Bootstrap and create all my services on k3s via ArgoCD
People like to diss running kubernetes on your personal servers, but once you have enough services running on your servers, managing them with docker compose no longer cuts it and kubernetes is the next logical step. Tools such as k9s make navigating a kubernetes cluster a breeze.
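The bootstrap steps above boil down to something like this once ansible has prepared the box (the git repo URL at the end is a placeholder):

```sh
# install single-node k3s (also drops a kubectl symlink and a kubeconfig at /etc/rancher/k3s/k3s.yaml)
curl -sfL https://get.k3s.io | sh -
# install ArgoCD into the cluster
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# then point ArgoCD at a git repo of manifests (app-of-apps) so it creates everything else
argocd app create root --repo https://github.com/example/homelab.git --path apps \
  --dest-server https://kubernetes.default.svc --dest-namespace argocd
```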
I use Proxmox, then stare at the dashboard realizing I have no practical use for a home lab.
So I’m not alone. I am trying to better myself.
Up until now I’ve been using docker and mostly configuring manually by dumping docker compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and occasional admin tasks. Yesterday, though, I spun up machine number 3 and I’m strongly considering setting up something better for provisioning/config. Once it’s all set up right it’s never been a big problem, but there are a couple of bits of initial setup that are a bit of a pain (mostly hooking up wireguard, which I use as a tunnel for remote admin and off-site reverse proxying).
Salt is probably the strongest contender for me, though that’s just because I’ve got a bit of experience with it.
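The wireguard admin-tunnel bit, for reference, is roughly this on the machine end (keys, addresses, and the endpoint are all placeholders):

```sh
# generate a keypair and write a minimal tunnel config
wg genkey | tee /etc/wireguard/wg0.key | wg pubkey > /etc/wireguard/wg0.pub
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.10.0.3/24
PrivateKey = <contents of wg0.key>

[Peer]
PublicKey = <public key of the off-site reverse proxy / admin box>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.0.0/24
PersistentKeepalive = 25
EOF
systemctl enable --now wg-quick@wg0
```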
Sqlite where possible, nginx, linux, no containers. I hate containers.
I’m somewhere in between. I hated containers for a long time, but now I deal with Kubernetes a lot at work.
For my personal projects I always hated containers. Once I started learning how to build them, and build them well, however, I really started enjoying it.
Using others’ containers is always hit or miss because a lot of them are WAY bloated. I especially hate all the docker-compose files that come with some database included as if I’m dying to run a ton of containerized database servers. Usually the underlying software supports the Postgres I run on the host itself.
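Trimming the bundled database out of an upstream compose file usually just means pointing the app at the host’s Postgres; a rough sketch (image, credentials, and database name are made up):

```sh
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: example/app:latest
    environment:
      DATABASE_URL: postgres://app:secret@host.docker.internal:5432/app
    extra_hosts:
      # maps host.docker.internal to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
EOF
docker compose up -d
```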
Proxmox, then create an LXC for everything (mostly debian and a bit of alpine), no automation, full yolo; if it breaks I have backups (problems are for future me, eh).
This.
Proxmox and then LXCs for anything I need. And yes, I cheat a bit: I use the excellent Proxmox scripts (https://tteck.github.io/Proxmox/) because I’m lazy like that haha
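Those helper scripts mostly wrap what you’d otherwise type by hand with pct; roughly like this (container ID, template name, and resources are placeholder values):

```sh
# create and start an unprivileged Debian LXC on Proxmox
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname jellyfin --cores 2 --memory 2048 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
pct start 110
```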
Mostly the same. Proxmox with several LXC, two of which are running docker. One for my multimedia, the other for my game servers.
I used to do the same, but nowadays I just run everything in docker, within a single LXC container on proxmox. Having to set up mono or similar every time I wanted to set up a game server or even jellyfin was annoying.
After many years of tinkering, I finally gave in and converted my whole stack over to UnRAID a few years ago. You know what? It’s awesome, and I wish I had done it sooner. It automates so many of the more tedious aspects of home server management. I work in IT, so for me it’s less about scratching the itch and more about having competent hosting of services I consider mission-critical. UnRAID lets me do that easily and effectively.
Most of my fun stuff is controlled through Docker and VMs via UnRAID, and I have a secondary external Linux server which handles some tasks I don’t want to saddle UnRAID with (PFSense, Adblocking, etc). The UnRAID server itself has 128GB RAM and dual XEON CPUs, so plenty of go for my home projects. I’m at 12TB right now but I was just on Amazon eyeing some 8TB drives…
Generally, it’s Proxmox, Debian, then whatever is needed for what I’m spinning up. Usually Docker Compose.
Lately I’ve been playing around with Ansible, but its use is far from routine for me right now.