Private Eyes
Building A Private Cloud
Last Updated 2025-10-21
They don't deserve your data. You know that already. What you probably don't know is that it's actually easy to opt out.
This is how I use a VPS as a remote homelab to run sensitive compute and projects with Docker, and keep everything recoverable by backing up to S3. The whole setup is reproducible from a Docker Compose file, Terraform, and a couple of bash scripts.
Getting The VPS Up
When setting up this project, I first tried DigitalOcean, as I've always found their Sydney droplets reliable. But when I ran out of RAM with multiple Docker containers running at once, rather than paying DigitalOcean's premium for more memory, I took it as an opportunity to look for an alternative.
Meet FlowVPS. Prices are fair, setup is straightforward, and the speed is out of this world. I'm not kidding, the latency blew me away; it genuinely feels local. Every other box I've SSH'd into has been in Sydney, but this Melbourne-based VPS responds like it's on my desk.
I used stow to pull my dotfiles onto the machine, and installed brew and zsh on the Ubuntu install, meaning that in about 10 minutes I had entirely replicated my macOS terminal experience remotely on a different operating system. Now if I could only get copy/paste working over SSH...
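For the unfamiliar, stow just symlinks package directories from a dotfiles repo into your home directory. A minimal sketch, where the repo URL and package names are placeholders rather than my actual layout:
# Clone the dotfiles repo and symlink the chosen packages into $HOME
git clone https://github.com/<your-username>/dotfiles.git ~/dotfiles
cd ~/dotfiles
stow --target="$HOME" zsh git nvim   # each argument is a directory of dotfiles to symlink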
How I Access My Services
On my laptop, I SSH into the VPS using a public/private keypair set up for a user on the box.
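If you're starting fresh, the keypair setup is only a couple of commands; a sketch, with the user and hostname as placeholders:
# Generate a keypair locally and copy the public key to the VPS user
ssh-keygen -t ed25519 -f ~/.ssh/vps -C "laptop -> vps"
ssh-copy-id -i ~/.ssh/vps.pub myuser@my-vps.example.com
ssh -i ~/.ssh/vps myuser@my-vps.example.com   # should now log in without a password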
I'm running autossh to bind ports from the VPS to my laptop's localhost, meaning the applications work essentially as if they were running in local Docker containers, and the tunnels connect automatically on login.
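Roughly, the invocation looks something like this; the ports and hostname are examples, not my exact mapping:
# Keep tunnels alive that map remote service ports onto the laptop's localhost.
# -M 0 disables autossh's monitor port and relies on SSH's own keepalives instead.
# 8082 -> FreshRSS, 8888 -> Atuin (example ports only)
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 8082:localhost:8082 \
  -L 8888:localhost:8888 \
  myuser@my-vps.example.com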
This means that you don't need to expose ANY inbound ports on your VPS outside
of SSH. I've also heavily restricted the outbound ports for the Docker
containers by defining two Docker networks in Docker Compose and setting ufw
rules for them:
- br-internet: containers that need external access
- br-isolated: containers that must stay internal
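A note on the interface names: ufw matches on the bridge device (in on br-internet), so the bridge names and subnets have to be stable, which presumably means pinning com.docker.network.bridge.name and an ipam subnet for each network in the Compose file; otherwise Docker generates br-<id> names. A quick sanity check that what Docker created matches what the rules below expect (this assumes the networks are given those fixed names in Compose rather than the project-prefixed defaults):
# Print each network's subnet and the bridge interface name ufw will match on
docker network inspect br-internet br-isolated \
  --format '{{.Name}}: subnet {{(index .IPAM.Config 0).Subnet}}, bridge {{index .Options "com.docker.network.bridge.name"}}'
ip -br link show type bridge   # br-internet and br-isolated should both appear here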
This might seem like overkill, but there's good reason to lock down containers that want to phone home with telemetry. I used ufw with the following rules to get it going:
# Base rules
sudo ufw allow OpenSSH
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow 'br-internet' (defined in Docker Compose) network (172.18.0.0/16) to access DNS, HTTP/S
sudo ufw route allow in on br-internet out on ens3 from 172.18.0.0/16 to any port 53 proto udp
sudo ufw route allow in on br-internet out on ens3 from 172.18.0.0/16 to any port 53 proto tcp
sudo ufw route allow in on br-internet out on ens3 from 172.18.0.0/16 to any port 80
sudo ufw route allow in on br-internet out on ens3 from 172.18.0.0/16 to any port 443
# Block 'br-isolated' (defined in Docker Compose file) network from internet
sudo ufw route deny in on br-isolated out on ens3 from 172.19.0.0/16 to any
# Allow host access to isolated containers
sudo ufw route allow in on br-isolated to 172.19.0.0/16
What's Running?
Here are the services that are currently up on the box:
- Atuin: The best shell history tool there is (fuzzy search, persistent across devices, secure). But my shell history shouldn't live on a foreign server.
- RSS-Bridge: Creates RSS feeds for TikTok and Instagram accounts, so I don't need to buy into the algorithm to follow the few people I need to. This isn't particularly private, as it relies on third-party bridges to fetch the data, meaning those third parties see what you're accessing on the original service. I don't see a reliable way around that, though, and avoiding the Instagram cesspool is a net positive overall.
- FreshRSS: A no-nonsense RSS feed fetcher/reader with a clean webapp. I like it because I can integrate it with newsboat, a great little RSS TUI.
- Firefly III: Personal budgeting management
- ConvertX: I've always felt uncomfortable uploading documents to free document conversion websites, and I can't be bothered learning the CLI tools and flags.
- Mini QR: Relying on a third party to create and decode QR codes is a security risk.
- Baikal: CalDAV/CardDAV for calendar and contacts.
- Stirling PDF: I can't fully recommend this one due to its tracking pixel, but it works well.
Automated Backups
I have set up automated daily backups for data that needs to be persistent (think my Atuin database, RSS feed database, and so on). A cronjob calls a shell script every morning that does the following (sketched after the list):
- Turn off containers with volumes needing backups
- Mount the volumes on fake containers
- Tar/copy the volumes to the VPS
- Turn off the fake containers
- Use rclone to push the backups to the S3 bucket
- Turn the real containers back on
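In rough terms, the script boils down to something like this; the service names, volume names, paths, bucket, and rclone remote are illustrative rather than my exact setup:
#!/usr/bin/env bash
set -euo pipefail

SERVICES=(atuin freshrss firefly)                    # example service names
VOLUMES=(atuin_data freshrss_data firefly_upload)    # example volume names
COMPOSE_FILE=/srv/homelab/docker-compose.yml         # example path
BACKUP_DIR=/srv/backups/$(date +%F)
mkdir -p "$BACKUP_DIR"

# Stop the containers whose volumes are being backed up
docker compose -f "$COMPOSE_FILE" stop "${SERVICES[@]}"

for vol in "${VOLUMES[@]}"; do
  # Throwaway container mounts the volume read-only and tars it onto the host
  docker run --rm -v "$vol":/data:ro -v "$BACKUP_DIR":/backup alpine \
    tar czf "/backup/${vol}.tar.gz" -C /data .
done

# Push the archives to the S3 bucket via a preconfigured rclone remote
rclone copy "$BACKUP_DIR" "s3backup:my-backup-bucket/$(date +%F)"

# Bring the real containers back up
docker compose -f "$COMPOSE_FILE" start "${SERVICES[@]}"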
The key trick for keeping this script simple was to ensure all persistent data lives on Docker volumes, which makes backups uniform: a simple for loop, rather than working out how each container needs its data backed up. Simplicity wins out over availability here... I don't need my FreshRSS running at 3am for a few minutes.
The VPS accesses the bucket via an IAM user created alongside the S3 bucket with Terraform.
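rclone needs that IAM user's keys to talk to the bucket. One way to wire it up, assuming the Terraform config exposes the key pair as outputs (the output names, remote name, and region here are hypothetical), is to define the remote entirely through rclone's environment variables:
# RCLONE_CONFIG_<REMOTE>_<OPTION> variables define a remote without touching rclone.conf
export RCLONE_CONFIG_S3BACKUP_TYPE=s3
export RCLONE_CONFIG_S3BACKUP_PROVIDER=AWS
export RCLONE_CONFIG_S3BACKUP_REGION=ap-southeast-2   # example region
export RCLONE_CONFIG_S3BACKUP_ACCESS_KEY_ID=$(terraform output -raw backup_access_key_id)
export RCLONE_CONFIG_S3BACKUP_SECRET_ACCESS_KEY=$(terraform output -raw backup_secret_access_key)
rclone lsd s3backup:   # list buckets to confirm the credentials work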
Infrastructure As Code
I've created a GitHub repo with the Docker Compose file, the Terraform file, and a couple of scripts to help you get set up on your own VPS if you want to build a similar homelab.
Conclusion
If this sounds like too much work, you can get 80% of the benefits with 5% of
the effort by spinning up the Docker containers using the docker-compose.yml
file on your local machine. This provides all the services, albeit without the
flexibility, security, and backup capabilities of what I've outlined in this
post.
git clone https://github.com/samuelstranges/public_homelab.git && cd public_homelab
# SET ENVIRONMENT VARIABLES
docker compose up -d
Even as I'm writing this, I'm considering other services I can add to my system. It has opened my mind to the benefits of the self-hosted mindset. Again, they don't deserve my data, and now, they don't have it.